Running local models on Macs gets faster with Ollama’s MLX support - Ars Technica
Apple Silicon Macs get a performance boost thanks to better unified memory usage.

[Image: A graphic made by Ollama to announce MLX support.]