Accessing LLMs over CLI

Nov 13, 2024

I recently discovered Simon Willison's llm project, a Python CLI app that lets you access LLMs from the command line. Since I started using it, I've been using LLMs and evaluating their usefulness on an almost daily basis. Here are some of the ways I'm using it.
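As a quick sketch of what day-to-day usage looks like (assuming llm is installed and an OpenAI API key has been configured):

```shell
# Install llm, e.g. with pipx
pipx install llm

# Store an OpenAI API key (prompted interactively)
llm keys set openai

# Ask a one-off question from the shell
llm "Explain the difference between a hard link and a symlink"

# Pipe content in as context for the prompt
cat error.log | llm "What is causing this error?"
```

Being able to pipe files and command output straight into a prompt is what makes the CLI form factor so convenient.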

LLM plugins

llm on its own only supports OpenAI models such as ChatGPT. However, it's built around plugins that extend its feature set. Simon seems to have gone all in on plugins, and I agree with their points about plugins being a useful way to extend free software. Here's a quote from Simon:

The great thing about plugins is it's a way of building an open source project where you don't have to review people's code to add features to your thing.

They also gave a recent talk about extending software with plugins.

The two families of models I'm using are Llama, which I run locally via Ollama, and Claude by Anthropic (a proprietary model). To use them, I've had to install the llm-ollama and llm-claude-3 plugins. Note: since I'm using NixOS and I like to manage my packages declaratively with it, I've opened a PR to package llm-ollama, and I'm also trying to update various llm-claude-3 related PRs.
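Setting the plugins up looks roughly like this (the model names below are examples and may differ depending on plugin versions and which Ollama models you have pulled):

```shell
# Install the two plugins
llm install llm-ollama
llm install llm-claude-3

# List available models to confirm the plugins loaded
llm models

# Pull a Llama model with Ollama first, then use it through llm
ollama pull llama3.2
llm -m llama3.2 "Summarise this file" < README.md

# Claude needs an Anthropic API key
llm keys set claude
llm -m claude-3.5-sonnet "Write a haiku about the command line"
```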

System prompts
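llm supports system prompts via the `-s`/`--system` flag, which pairs nicely with piped input; a minimal sketch:

```shell
# -s sets the system prompt; the piped file becomes the user prompt
cat main.py | llm -s "You are a code reviewer. Point out bugs concisely."

# A system prompt can also be saved as a reusable template
llm -s "Reply only in French" --save french
llm -t french "Good morning"
```

This makes it easy to build up a small library of task-specific prompts and reuse them from the shell.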

Tags: llm cli