Using Mistral API seems broken on Cloudron
-
I get an odd error that says the Mistral API request is malformed, but it works if I install Open WebUI locally via Docker.
May 29 11:14:06 ERROR:apps.openai.main:400 Client Error: Bad Request for url: https://api.mistral.ai/v1/chat/completions
<30>1 2024-05-29T16:14:06Z iron 85872407-1b89-46c1-ae6f-868b66ed9a71 950 85872407-1b89-46c1-ae6f-868b66ed9a71 - Traceback (most recent call last):
<30>1 2024-05-29T16:14:06Z iron 85872407-1b89-46c1-ae6f-868b66ed9a71 950 85872407-1b89-46c1-ae6f-868b66ed9a71 - File "/app/code/backend/apps/openai/main.py", line 361, in proxy
<30>1 2024-05-29T16:14:06Z iron 85872407-1b89-46c1-ae6f-868b66ed9a71 950 85872407-1b89-46c1-ae6f-868b66ed9a71 - r.raise_for_status()
<30>1 2024-05-29T16:14:06Z iron 85872407-1b89-46c1-ae6f-868b66ed9a71 950 85872407-1b89-46c1-ae6f-868b66ed9a71 - File "/usr/local/lib/python3.10/dist-packages/requests/models.py", line 1021, in raise_for_status
<30>1 2024-05-29T16:14:06Z iron 85872407-1b89-46c1-ae6f-868b66ed9a71 950 85872407-1b89-46c1-ae6f-868b66ed9a71 - raise HTTPError(http_error_msg, response=self)
May 29 11:14:06 requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api.mistral.ai/v1/chat/completions
May 29 11:14:06 INFO: 75.19.98.88:0 - "POST /openai/api/chat/completions HTTP/1.1" 500 Internal Server Error
-
The more I look into this, the more I think it's on the Mistral API side. It looks like they are not supporting OpenAI's API structure.
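In case it helps with debugging: Mistral's /v1/chat/completions endpoint can be called directly with an OpenAI-style payload to check whether the 400 comes from the API itself or from the Open WebUI proxy. A minimal sketch, assuming a key in a MISTRAL_API_KEY environment variable and the mistral-small-latest model name (both placeholders for whatever you actually use):

import os
import requests

# Call the same endpoint Open WebUI proxies to, with an OpenAI-style chat payload.
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-small-latest",  # placeholder; use the model you selected in Open WebUI
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
print(resp.status_code)
print(resp.text)  # on a 400, the error body usually names the rejected field

If that request succeeds, the OpenAI-style structure itself is fine and the problem is more likely something in the exact payload the proxy sends.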
-
@CptPlastic You might want to try via OpenRouter instead, because it works well for me.
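If it helps, the OpenAI-compatible connection in Open WebUI can simply be pointed at OpenRouter's endpoint. A rough sketch of the relevant settings in env-var form (the exact place to set these on Cloudron may differ, and the key shown is a placeholder):

OPENAI_API_BASE_URL=https://openrouter.ai/api/v1
OPENAI_API_KEY=sk-or-...   # your OpenRouter API key (placeholder)

With that in place, OpenRouter exposes Mistral (and many other) models through the same OpenAI-style /chat/completions interface.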
-
This is the log after initiating a chat message:
INFO [apps.ollama.main] url: http://127.0.0.1:11434
time=2024-07-17T12:14:03.637Z level=INFO source=memory.go:309 msg="offload to cpu" layers.requested=-1 layers.model=33 layers.offload=0 layers.split="" memory.available="[9.3 GiB]" memory.required.full="5.5 GiB" memory.required.partial="0 B" memory.required.kv="1.0 GiB" memory.required.allocations="[5.5 GiB]" memory.weights.total="4.7 GiB" memory.weights.repeating="4.6 GiB" memory.weights.nonrepeating="105.0 MiB" memory.graph.full="560.0 MiB" memory.graph.partial="585.0 MiB"
time=2024-07-17T12:14:03.638Z level=INFO source=server.go:383 msg="starting llama server" cmd="/tmp/ollama495631578/runners/cpu_avx2/ollama_llama_server --model /media/Nextcloud/openwebui_models/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 --ctx-size 8192 --batch-size 512 --embedding --log-disable --no-mmap --parallel 4 --port 36195"
time=2024-07-17T12:14:03.639Z level=INFO source=sched.go:437 msg="loaded runners" count=1
time=2024-07-17T12:14:03.639Z level=INFO source=server.go:571 msg="waiting for llama runner to start responding"
time=2024-07-17T12:14:03.641Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="a8db2a9" tid="140424338167680" timestamp=1721218443
INFO [main] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | " tid="140424338167680" timestamp=1721218443 total_threads=8
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="7" port="36195" tid="140424338167680" timestamp=1721218443
llama_model_loader: loaded meta data with 25 key-value pairs and 291 tensors from /media/Nextcloud/openwebui_models/models/blobs/sha256-ff82381e2bea77d91c1b824c7afb83f6fb73e9f7de9dda631bcdbca564aa5435 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = Mistral-7B-Instruct-v0.3
llama_model_loader: - kv 2: llama.block_count u32 = 32
llama_model_loader: - kv 3: llama.context_length u32 = 32768
llama_model_loader: - kv 4: llama.embedding_length u32 = 4096
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.attention.head_count u32 = 32
llama_model_loader: - kv 7: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 8: llama.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 10: general.file_type u32 = 2
llama_model_loader: - kv 11: llama.vocab_size u32 = 32768
llama_model_loader: - kv 12: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 13: tokenizer.ggml.model str = llama
llama_model_loader: - kv 14: tokenizer.ggml.pre str = default
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32768] = ["<unk>", "<s>", "</s>", "[INST]", "[...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32768] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32768] = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 22: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 23: tokenizer.chat_template str = {{ bos_token }}{% for message in mess...
llama_model_loader: - kv 24: general.quantization_version u32 = 2
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type q4_0: 225 tensors
llama_model_loader: - type q6_K: 1 tensors
llm_load_vocab: special tokens cache size = 1027
llm_load_vocab: token to piece cache size = 0.1731 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32768
llm_load_print_meta: n_merges = 0
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 7.25 B
llm_load_print_meta: model size = 3.83 GiB (4.54 BPW)
llm_load_print_meta: general.name = Mistral-7B-Instruct-v0.3
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 781 '<0x0A>'
llm_load_print_meta: max token length = 48
llm_load_tensors: ggml ctx size = 0.14 MiB
llm_load_tensors: CPU buffer size = 3922.02 MiB
time=2024-07-17T12:14:03.893Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server loading model"
time=2024-07-17T12:14:09.958Z level=INFO source=server.go:612 msg="waiting for server to become available" status="llm server error"
INFO: 172.18.0.1:41808 - "GET /health HTTP/1.1" 200 OK
time=2024-07-17T12:14:10.211Z level=ERROR source=sched.go:443 msg="error loading llama server" error="llama runner process has terminated: signal: killed "
[GIN] 2024/07/17 - 12:14:10 | 500 | 6.599245043s | 127.0.0.1 | POST "/v1/chat/completions"
INFO: 122.180.29.153:0 - "POST /ollama/v1/chat/completions HTTP/1.1" 500 Internal Server Error
-
signal: killed might indicate that the container or the whole system just runs out of memory?
-
nebulon marked this topic as a question
-
nebulon has marked this topic as solved
-
@nebulon Thanks!
Indeed, that was it. RAM was set at 2 GB, whereas it seems to require a minimum of 5.5 GB to function.
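For anyone hitting the same thing, a quick sanity check is to compare the host's available memory against the memory.required.full figure from the Ollama log. A rough Linux-only sketch using only the standard library; the 5.5 GiB threshold is just the value reported in the log above:

# Quick check: is there enough free memory to load the model without the
# llama runner getting OOM-killed (the "signal: killed" above)?
REQUIRED_GIB = 5.5  # taken from the log line memory.required.full="5.5 GiB"

with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)

available_kib = int(meminfo["MemAvailable"].split()[0])  # /proc/meminfo reports kB
available_gib = available_kib / (1024 ** 2)
print(f"available: {available_gib:.1f} GiB / required: {REQUIRED_GIB} GiB")
if available_gib < REQUIRED_GIB:
    print("Not enough memory: the llama runner is likely to be OOM-killed.")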
@shrey said in Using Mistral API seems broken on Cloudron:
@nebulon Thanks!
Indeed, that was it. RAM was set at 2 GB, whereas it seems to require a minimum of 5.5 GB to function.
That's right, and if you can provide it with at least 8-16GB of RAM you will see a huge difference. These things are meant to consume large resources for now, but it's getting better.