How to install Headless LM Studio on Linux

Simply run:

curl -fsSL https://lmstudio.ai/install.sh | bash
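
If you prefer to inspect the installer before piping it into bash (a general curl-to-bash precaution, not something LM Studio requires), you can download it first:

curl -fsSL https://lmstudio.ai/install.sh -o install.sh
less install.sh
bash install.sh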

After the install finishes, don’t forget to reload your shell configuration via source ~/.bashrc so that you can run the lms command.
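
To confirm the shell can now find the CLI, you can check with:

command -v lms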

To bring it up, run:

lms daemon up

After it’s up, you can download a model using lms get, like this:

lms get MODEL_LINK@MODEL_QUANTIZATION

e.g.

lms get https://lmstudio.ai/models/nvidia/nemotron-3-nano-omni@q8_0
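
To see what has been downloaded so far, recent versions of the CLI can list local models (treat the exact subcommand as an assumption if you are on an older release):

lms ls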

To load a model into memory:

lms load "nemotron-3-nano-omni" --context-length 128000

To start the server:

lms server start --port 1234
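
The server speaks an OpenAI-compatible API, so a quick sanity check is to list the models it exposes:

curl http://localhost:1234/v1/models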

That’s it, Enjoy!


How to Configure Tavily MCP Server in LM Studio

I needed to add web search functionality to my models in LM Studio, so I simply added the Tavily MCP server configuration to LM Studio, along with my API key of course.

You simply click on Edit mcp.json after clicking + Install on the top right-hand side:

then replace the values there with the following:

{
  "mcpServers": {
    "tavily-remote": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.tavily.com/mcp/?tavilyApiKey=tvly-prod-THE_REST_OF_YOUR_API_KEY"
      ]
    }
  }
}

Click on Save and you are good to go!
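
If the server does not come up inside LM Studio, you can run the same bridge command from the config by hand to check connectivity; it should connect and then sit waiting for MCP traffic on stdin (Ctrl-C to stop):

npx -y mcp-remote "https://mcp.tavily.com/mcp/?tavilyApiKey=tvly-prod-THE_REST_OF_YOUR_API_KEY"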

Don’t forget to enable it :), Enjoy!


Count JSON objects in a file using jq

Simply run:

cat input_file.json | jq 'length'

For example:

{
  "entity_id_01": {
    "field_a": "value_a",
    "field_b": "value_b",
    "field_c": "This is a generic feedback message"
  },
  "entity_id_02": {
    "field_a": "value_a",
    "field_b": "value_b",
    "field_c": "This is a generic feedback message"
  }
}

will return 2, because length counts the top-level keys of the object!
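
Note that length counts the elements of a top-level array the same way, and jq can read the file directly, so the cat is optional:

jq 'length' input_file.json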

That’s it, Enjoy!


Enable debug mode for Ollama on Linux/macOS

Simply run it as follows:

OLLAMA_DEBUG=1 ollama serve
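
Note that on Linux, installs done via the official script usually leave Ollama running as a systemd service, which already holds the port; if ollama serve complains that the address is in use, stop the service first (assuming the default service name):

sudo systemctl stop ollama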

Note: You might need to export another variable to tell ollama serve where the models are saved, in case you receive a 404 about the model not being available; for some reason, this happened to me on Linux only.

In this case, just export these two variables and then run the command:

export OLLAMA_MODELS=/usr/share/ollama/.ollama/models
export OLLAMA_DEBUG=1
ollama serve
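
With the server running, you can confirm from another terminal that the models are now visible:

ollama list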

That’s it, Enjoy!


Build & Run LibreTranslate Docker Image for Offline Work

The documentation on https://hub.docker.com/r/libretranslate/libretranslate was a bit misleading, so here are the proper steps:

Building

git clone https://github.com/uav4geo/LibreTranslate
cd LibreTranslate
docker build --no-cache --progress=plain -f docker/Dockerfile --build-arg with_models=true -t libretranslate .

The above will take some time to build and download all the dictionaries for offline use.

Running

docker run -it -p 5000:5000 libretranslate
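
Once the container is up, a quick way to confirm offline translation actually works is to call the standard /translate endpoint (adjust the language pair to one that was baked into the image):

curl -X POST http://localhost:5000/translate -H "Content-Type: application/json" -d '{"q": "Hello", "source": "en", "target": "es"}'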

That’s it, Enjoy!
