Llama 3.1 70B Latency Optimized
Llama 3.1 70B Latency Optimized is a large language model created by Meta. It is served by AWS Bedrock using Bedrock's latency-optimized inference mode.
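On Bedrock, latency-optimized serving is requested per call through the Converse API's `performanceConfig` field. Below is a minimal sketch of such a request; the cross-region inference profile ID and region are illustrative assumptions, so verify them against the current Bedrock documentation before use.

```python
# Sketch of a Bedrock Converse request enabling latency-optimized inference
# for Llama 3.1 70B. The model ID and region below are assumptions for
# illustration; with boto3 this dict is passed as keyword arguments to
# bedrock_runtime.converse(**request).
request = {
    "modelId": "us.meta.llama3-1-70b-instruct-v1:0",  # assumed inference profile ID
    "messages": [
        {
            "role": "user",
            "content": [{"text": "Summarize the Llama 3.1 release in one sentence."}],
        }
    ],
    "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
    # Selects the latency-optimized variant instead of standard serving.
    "performanceConfig": {"latency": "optimized"},
}

# Sending it requires AWS credentials and the bedrock-runtime client:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-2")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

The call itself is left commented out because it needs live AWS credentials; the dict shows the shape of the payload.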
Self-hostable: Yes
Reasoning Tier: No
Llama 3.1 70B Latency Optimized Tokenizer
Llama 3.1 uses Meta's tiktoken-based BPE tokenizer with a vocabulary of roughly 128K tokens, shared across the Llama 3.x family.
Llama 3.1 70B Latency Optimized Pricing
Input: $0.00 · Output: $0.00