NVIDIA: Llama 3.1 Nemotron Ultra 253B v1 reasoning
Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node.
Note: reasoning is enabled by including `detailed thinking on` in the system prompt; without it, the model responds without a reasoning trace. See Usage Recommendations for details.
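The reasoning toggle above is a system-prompt string rather than a dedicated API parameter. A minimal sketch of a request body, assuming an OpenAI-compatible chat-completions endpoint (the model slug shown is illustrative, not confirmed by this page):

```python
# Sketch: enabling reasoning via the system prompt.
# The model slug is an assumption; check your provider's catalog.
payload = {
    "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",
    "messages": [
        # Reasoning mode is toggled here, not via a separate parameter.
        {"role": "system", "content": "detailed thinking on"},
        {"role": "user", "content": "How many primes are below 20?"},
    ],
}
```

To disable reasoning, the system prompt would carry `detailed thinking off` instead.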
Capabilities
Context Window: 131k tokens
Max Output: not specified
Pricing (per 1M tokens)
Input: $0.60
Output: $1.80
Cache Read: -
Cache Write: -
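The per-1M-token prices above translate into per-request costs by simple proportion. A small sketch (the example token counts are illustrative):

```python
# Sketch: estimating request cost from the listed per-1M-token prices.
INPUT_PRICE = 0.60   # USD per 1M input tokens
OUTPUT_PRICE = 1.80  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a 100k-token prompt that generates 4k tokens:
cost = estimate_cost(100_000, 4_000)  # 0.06 + 0.0072 = 0.0672 USD
```

Note that a full 131k-token context filled on every call adds up quickly, so prompt caching (not priced here) or prompt trimming matters at this context length.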
Supported Parameters
frequency_penalty, include_reasoning, max_tokens, presence_penalty, reasoning, repetition_penalty, response_format, structured_outputs, temperature, top_k, top_p
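Several of these parameters can be combined in one request body. A sketch, assuming an OpenAI-compatible chat-completions schema (the model slug and all values are illustrative, not recommendations from this page):

```python
# Sketch: a request body exercising some of the supported parameters.
# Values are illustrative; the model slug is an assumption.
request_body = {
    "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",
    "messages": [
        {"role": "user", "content": "Summarize RAG in one sentence."},
    ],
    "max_tokens": 512,           # cap on generated tokens
    "temperature": 0.6,          # sampling temperature
    "top_p": 0.95,               # nucleus sampling cutoff
    "top_k": 40,                 # top-k sampling cutoff
    "repetition_penalty": 1.1,   # discourage verbatim repeats
    "include_reasoning": True,   # return the reasoning trace, if enabled
}

SUPPORTED = {
    "frequency_penalty", "include_reasoning", "max_tokens",
    "presence_penalty", "reasoning", "repetition_penalty",
    "response_format", "structured_outputs", "temperature",
    "top_k", "top_p",
}
# Every non-core key used above is in the supported list.
extra_keys = set(request_body) - {"model", "messages"}
```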