Recipes
A cookbook of patterns the embedding surface supports. Each recipe is a runnable snippet plus the rationale.
Capture stdout
package main

import (
	"bytes"
	"fmt"

	"github.com/tamnd/gopy/objects"
	"github.com/tamnd/gopy/pythonrun"
	"github.com/tamnd/gopy/state"
	_ "github.com/tamnd/gopy/stdlibinit"
)

func main() {
	ts := state.NewThread()
	var buf bytes.Buffer
	src := `for i in range(3): print(i, i*i)`
	if err := pythonrun.RunSimpleString(ts, src, objects.NewDict(), &buf); err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}
Pass any io.Writer as the fourth argument: io.Discard to silence output, os.Stdout to pass it through, a *bytes.Buffer to capture it, or a custom writer that trims or transforms each chunk.
Pre-populate globals from Go
globals := objects.NewDict()
globals.SetItemString("user_id", objects.NewInt(42))
globals.SetItemString("flags", objects.NewList())
src := `
print(f"user {user_id}")
flags.append("seen")
`
pythonrun.RunSimpleString(ts, src, globals, os.Stdout)
// flags now contains the appended value, observable from Go.
The dict you pass is the module's globals. Anything you store in it is visible to the script; anything the script assigns at module scope (a function, a class, a variable) lands back in the same dict. This is the cleanest channel between Go and Python.
Evaluate a single expression
src := `2 ** 64 - 1`
mod, _ := parser.ParseString(src, "<expr>", parser.ModeEval)
code, _ := compile.Compile(mod, "<expr>", 0)
result, _ := pythonrun.RunCode(ts, code, globals)
fmt.Println(result) // 18446744073709551615
ModeEval parses a single expression and produces a code
object whose result is the expression's value. Useful for
calculators, rule evaluators, or anywhere a user supplies a
formula.
Compile once, call many
src := `
def score(items):
    return sum(x * x for x in items)
`
mod, _ := parser.ParseString(src, "lib", parser.ModeFile)
code, _ := compile.Compile(mod, "lib", 0)
globals := objects.NewDict()
pythonrun.RunCode(ts, code, globals)
scoreFn, _ := globals.GetItemString("score")
// Call many times.
for _, batch := range batches {
	args := []objects.Object{toPyList(batch)}
	result, _ := scoreFn.Call(ts, args, nil)
	fmt.Println(result)
}
Parsing and compiling Python is the slow part of the pipeline. Once you have a code object, hand it to the VM as many times as you need.
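One way to exploit that split is a small cache keyed by source text. A sketch under stated assumptions: the compile field stands in for the parser.ParseString + compile.Compile pair above and is stubbed with a string so the snippet runs on its own; codeCache is an illustrative name, not a gopy type.

```go
package main

import (
	"fmt"
	"sync"
)

// codeCache memoises compiled code objects by source text, so repeated
// requests for the same script pay the parse+compile cost exactly once.
type codeCache struct {
	mu      sync.Mutex
	compile func(src string) (any, error)
	codes   map[string]any
}

func (c *codeCache) get(src string) (any, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if code, ok := c.codes[src]; ok {
		return code, nil
	}
	code, err := c.compile(src)
	if err != nil {
		return nil, err
	}
	c.codes[src] = code
	return code, nil
}

func main() {
	compiles := 0
	cache := &codeCache{
		codes: map[string]any{},
		compile: func(src string) (any, error) {
			compiles++ // real code: parser.ParseString + compile.Compile
			return "code:" + src, nil
		},
	}
	for i := 0; i < 3; i++ {
		cache.get("def score(items): ...")
	}
	fmt.Println(compiles) // 1
}
```

Cache the code object, not the globals dict: each run should usually get a fresh dict.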
Run user code with a curated set of built-ins
// Build a sandbox builtins dict.
safeBuiltins := objects.NewDict()
for _, name := range []string{"abs", "len", "max", "min", "range", "sum", "print"} {
	if fn, _ := imp.GetBuiltin(name); fn != nil {
		safeBuiltins.SetItemString(name, fn)
	}
}
globals := objects.NewDict()
globals.SetItemString("__builtins__", safeBuiltins)
userSrc := `print(sum(range(10)))`
err := pythonrun.RunSimpleString(ts, userSrc, globals, &output)
The script can call abs, len, max, min, range, sum,
and print. It cannot import, cannot open, cannot reach
the host's environment. See
Embedding -> Sandboxing
for the full pattern.
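The allow-list loop generalises to a small helper. A self-contained sketch: filterAllowList is a hypothetical name, and plain strings stand in for builtin function objects.

```go
package main

import "fmt"

// filterAllowList keeps only the entries on the allow-list,
// the same shape as the safeBuiltins loop above, factored for reuse.
// Names on the list but absent from the table are silently skipped.
func filterAllowList(all map[string]any, allowed []string) map[string]any {
	safe := make(map[string]any, len(allowed))
	for _, name := range allowed {
		if v, ok := all[name]; ok {
			safe[name] = v
		}
	}
	return safe
}

func main() {
	all := map[string]any{"len": "len-fn", "open": "open-fn", "sum": "sum-fn"}
	safe := filterAllowList(all, []string{"len", "sum", "eval"})
	fmt.Println(len(safe)) // 2: "open" is not allowed, "eval" is not in the table
}
```

Starting from an empty dict and adding names is safer than starting full and removing them: anything you forget is absent, not exposed.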
Time-bound execution
done := make(chan error, 1)
go func() {
	done <- pythonrun.RunSimpleString(ts, userSrc, globals, &output)
}()
select {
case err := <-done:
	return err
case <-time.After(500 * time.Millisecond):
	ts.RequestStop() // sets the eval-breaker bit
	return errors.New("user code timed out")
}
ts.RequestStop() flips the per-thread eval-breaker bit; the VM notices it on the next backward branch or function call and raises KeyboardInterrupt. This is the pattern production sandboxes use to bound user compute.
Hot reload a script
// Watch a file, recompile on change.
watcher, err := fsnotify.NewWatcher()
if err != nil {
	panic(err)
}
watcher.Add("rules.py")
var mu sync.Mutex
var current objects.Object
recompile := func() {
	src, _ := os.ReadFile("rules.py")
	mod, _ := parser.ParseString(string(src), "rules.py", parser.ModeFile)
	code, _ := compile.Compile(mod, "rules.py", 0)
	g := objects.NewDict()
	pythonrun.RunCode(ts, code, g)
	fn, _ := g.GetItemString("evaluate")
	mu.Lock()
	current = fn
	mu.Unlock()
}
recompile()
for ev := range watcher.Events {
	if ev.Op&fsnotify.Write != 0 {
		recompile()
	}
}
Compile inside the watcher callback; swap the function pointer under a lock; serve traffic from the current pointer.
Goroutines and Threads
Each goroutine that calls into the VM needs its own
state.Thread. The GIL takes care of serialisation:
var wg sync.WaitGroup
for i := 0; i < 8; i++ {
	wg.Add(1)
	go func(i int) {
		defer wg.Done()
		ts := state.NewThread()
		src := fmt.Sprintf(`print("worker %d done")`, i)
		pythonrun.RunSimpleString(ts, src, objects.NewDict(), os.Stdout)
	}(i)
}
wg.Wait()
For true parallel Python, the per-interpreter GIL (PEP 684) plumbing is on the roadmap. Today the workers share a GIL and take turns.
Round-trip JSON between Go and Python
// Go -> Python via JSON.
payload, _ := json.Marshal(map[string]any{"users": []string{"a", "b"}})
src := fmt.Sprintf(`
import _json
data = _json.loads(%q)
print(data["users"])
`, string(payload))
pythonrun.RunSimpleString(ts, src, objects.NewDict(), &output)
// Python -> Go via JSON.
globals := objects.NewDict()
pythonrun.RunSimpleString(ts, `
import _json
result = _json.dumps({"answer": 42})
`, globals, io.Discard)
resultObj, _ := globals.GetItemString("result")
var decoded any
json.Unmarshal([]byte(resultObj.(*objects.Str).String()), &decoded)
JSON is the lowest-effort bridge when both sides have heterogeneous data and you do not want to write a custom marshaller.