Objective

Today, the development landscape has shifted from knowing how to write code to knowing how to prompt for code that meets the business needs. IMO, this is a good way to get to your MVP 1.0; beyond that, developers should still know how to validate, test, and promote the code. Additionally, they should have a great handle on how to troubleshoot and extend the code to meet the business needs.

For now, we will use the opencode CLI to connect to Ollama and generate code from prompts with qwen3-coder.

My goal is to walk you through all the steps involved in generating code using these tools, while keeping your data private.

Setup Ollama

  • To set up Ollama, I use my Raspberry Pi 5 (16GB) running Ubuntu 24.04 LTS.

  • On the machine, execute the following script to install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
  • Next, edit the service so Ollama can be reached from outside the machine:

    sudo systemctl edit ollama.service

  • Add the following before the line ### Lines below this comment will be discarded
      [Service]
      Environment="OLLAMA_HOST=0.0.0.0:11434"
      Environment="OLLAMA_ORIGINS=*"
    
  • Finally restart the ollama service

    sudo systemctl restart ollama.service
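As an aside, if you would rather script this change than open an editor, systemctl edit simply writes a drop-in file at /etc/systemd/system/ollama.service.d/override.conf. Here is a minimal sketch; it writes to a temp directory so it is safe to run anywhere, but on the real host you would target the path above as root and then reload and restart the service:

```shell
# Sketch: create the same systemd override without the interactive editor.
# NOTE: writes to a temp directory for safe illustration; on the real host
# the drop-in lives at /etc/systemd/system/ollama.service.d/override.conf
# (written as root), followed by:
#   sudo systemctl daemon-reload && sudo systemctl restart ollama.service
DROPIN_DIR="$(mktemp -d)/ollama.service.d"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"
EOF
echo "wrote $DROPIN_DIR/override.conf"
```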

  • Verify the connectivity to the ollama server from your local machine. Replace <OLLAMA_HOST> with your host.

      nc -vz <OLLAMA_HOST> 11434
      Connection to <OLLAMA_HOST> port 11434 [tcp/*] succeeded!
    
  • Once Ollama is set up, you can pull down the relevant models.

  • Here is the complete list of models I use, but you can choose to pull only the specific ones that you wish to use.

    ollama pull qwen3-embedding:latest
    ollama pull qwen3-coder-next:cloud
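If your model list grows, a small wrapper keeps the pulls in one place (a sketch; pull_models is my own name, and the model names are just the ones from the list above):

```shell
# Pull several Ollama models in one go, stopping on the first failure.
pull_models() {
  for model in "$@"; do
    echo "pulling $model"
    ollama pull "$model" || return 1
  done
}

# Usage:
# pull_models qwen3-embedding:latest qwen3-coder-next:cloud
```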

NOTE: The model must support tool calling; if it doesn’t, code generation won’t work.
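One way to sanity-check tool support before wiring a model into opencode is to grep the capabilities that ollama show prints. This is a heuristic sketch; the exact output format varies between ollama versions:

```shell
# Heuristic: recent ollama versions list a "tools" capability in the
# output of `ollama show <model>`. Treat a miss as "probably not",
# not as a definitive answer.
supports_tools() {
  ollama show "$1" 2>/dev/null | grep -qiw "tools"
}

# Usage:
# supports_tools qwen3-coder-next:cloud && echo "tool calling should work"
```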

Setup opencode cli

  • Setting up the opencode CLI is pretty straightforward:
    curl -fsSL https://opencode.ai/install | bash
  • To configure additional models for opencode alongside the default ones, start by editing the config file

    vi ~/.config/opencode/opencode.json

  • Next add the following contents into it

      {
          "$schema": "https://opencode.ai/config.json",
          "provider": {
              "ollama": {
                  "models": {
                      "qwen3-coder-next:cloud": {
                          "_launch": true,
                          "limit": {
                              "context": 262144,
                              "output": 32768
                          },
                          "name": "qwen3-coder-next:cloud"
                      },
                      "qwen3.5:latest": {
                          "_launch": true,
                          "limit": {
                              "context": 262144,
                              "output": 32768
                          },
                          "name": "qwen3.5:latest"
                      }
                  },
                  "name": "Ollama",
                  "npm": "@ai-sdk/openai-compatible",
                  "options": {
                      "baseURL": "http://<OLLAMA_HOST>:11434/v1"
                  }
              }
          }
      }
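A stray comma or quote in this file is easy to miss, so it is worth validating the JSON before launching opencode; python3 -m json.tool (part of the Python standard library) is enough:

```shell
# Validate that the opencode config is well-formed JSON.
CONFIG="$HOME/.config/opencode/opencode.json"
if python3 -m json.tool "$CONFIG" >/dev/null 2>&1; then
  echo "$CONFIG is valid JSON"
else
  echo "$CONFIG is missing or malformed" >&2
fi
```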
    
  • Now fire up the opencode CLI from your terminal, press Ctrl+P, choose Switch Model, and pick qwen3-coder-next:cloud (Ollama). Ensure you select the desired model.

  • Finally get coding

Code generating tips

  • Start by asking opencode to generate a prompt for your idea. Think through what you want, then ask opencode a simple question:
      Provide me a prompt where I want to generate helm charts that I would host on my github.com/rahulkj/charts site. This helm chart would allow users to deploy an application to kubernetes. In this deployment, the following resources need to be provisioned, ingress gateway, service, pod, probes, pvcs, etc
    
  • It takes a few seconds to produce an output. Example output:
      Here's a prompt you can use:
      Generate a Helm chart for deploying an application to Kubernetes. The chart should be hosted on github.com/rahulkj/charts and include:
    
      - values.yaml with configurable values
      - Chart.yaml with metadata
      - templates/ directory with:
          - Ingress gateway configuration
          - Service (ClusterIP/NodePort/LoadBalancer)
          - Deployment with configurable replicas
          - Pod security context and resource limits
          - Readiness and liveness probes
          - PersistentVolumeClaim for storage
          - ConfigMap for application configuration
          - Secret for sensitive data
      - Use Helm best practices and naming conventions
      - Include helper templates for common names
      - Add documentation in README.md
      - Use  for all configurable parameters
      - Include proper templating with
    
      Would you like me to generate this chart for you?
    
  • Now use this prompt as-is, or tweak it before continuing.

CATCH

  • Nothing is absolutely free. At some point you may exhaust your tokens and hit GitHub rate limits, so be prepared for it; this is when you can switch to the free model from Opencode.ai, which is Minimax.

Enjoy coding and keep sharing!!