ollama/llm
Jesse Gross fdb109469f llm: Allow overriding flash attention setting
As we automatically enable flash attention for more models, there
are likely some cases where we get it wrong. This allows setting
OLLAMA_FLASH_ATTENTION=0 to disable it, even for models that would
otherwise have flash attention enabled.
2025-10-02 12:07:20 -07:00
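The override described in the commit can be thought of as an environment check layered on top of the automatic per-model default. Below is a minimal Go sketch of that idea; the names (shouldUseFlashAttention, modelSupportsFA) are illustrative and are not Ollama's actual internals.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// shouldUseFlashAttention starts from the automatic per-model decision and
// lets OLLAMA_FLASH_ATTENTION force it on ("1"/"true") or off ("0"/"false").
// This is a hypothetical sketch, not the real ollama/llm implementation.
func shouldUseFlashAttention(modelSupportsFA bool) bool {
	enabled := modelSupportsFA // automatic default
	if v := os.Getenv("OLLAMA_FLASH_ATTENTION"); v != "" {
		if override, err := strconv.ParseBool(v); err == nil {
			enabled = override
		}
	}
	return enabled
}

func main() {
	// With OLLAMA_FLASH_ATTENTION=0 in the environment, this prints false
	// even though the model would normally have flash attention enabled.
	fmt.Println(shouldUseFlashAttention(true))
}
```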