@robi Thanks—yes, I uninstalled it.
-
README files triggering Hostinger malware scanner
@joseph Hostinger's response was basically, "We're not responsible for our third-party scanning tool."
And @BrutalBirdie I guess it's good that the free version of Monarx at least can't (or supposedly doesn't) delete files autonomously?
-
README files triggering Hostinger malware scanner
Thanks, @BrutalBirdie @nebulon, I appreciate your feedback. @nebulon, are you implying this is a security concern? I share that concern and might just turn off the malware scanner.
-
README files triggering Hostinger malware scanner
I've got my Cloudron running on a Hostinger VPS, and it has started getting the attention of Hostinger's built-in malware scanner.
The files in question appear to be node-related README files on various apps. For instance, stuff like:
```
/var/lib/docker/overlay2/126fc8372bdf65b1c50de1d1b818c4b69d05786ede10742be2ad17c2167cff23/diff/app/code/node_modules/devalue/README.md
/var/lib/docker/overlay2/01a2a6266e84fe08aa03dcf6a3e2c43c48eefef48590ae04c84af4eae316a261/merged/app/code/node_modules/devalue/README.md
```
I am not sure why they are being flagged. Can anyone confirm that I should not be concerned about this? Or otherwise?
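For what it's worth, one check I'm considering running myself (a sketch; it assumes npm and sha256sum are available on the host, and that the flagged file really belongs to the devalue package from npm):
```sh
# Fetch the published devalue tarball and compare its README against the
# flagged copy. Pin the version to whatever the app's lockfile specifies.
npm pack devalue --silent
tar -xzf devalue-*.tgz package/README.md
sha256sum package/README.md \
  /var/lib/docker/overlay2/126fc8372bdf65b1c50de1d1b818c4b69d05786ede10742be2ad17c2167cff23/diff/app/code/node_modules/devalue/README.md
```
If the hashes match, the scanner is flagging a stock README rather than anything tampered with.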
Thanks.
-
Ollama: permissions issue when using volume storage
Question: What should the correct directory ownership and permissions be for the `ollama-home` folder on a mounted volume?
-
Ollama: permissions issue when using volume storage
As I said above, I'm using the Filesystem mode for the volume. Should I use another?
-
Ollama: permissions issue when using volume storage
@BrutalBirdie Yes, as noted above: "The problem does not occur under the default app settings. But it does when I use the recommended path of creating a separate volume for the models."
-
Ollama: permissions issue when using volume storage
Relatedly (I'm trying again with another volume), does anyone see if I'm doing something wrong here? I had to `mkdir` the `/media/ollama-vol` directory manually before adding the volume. Could there be something screwy with my filesystem permissions in general?
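For reference, here is roughly what I ran (a sketch; the UID/GID is my assumption, since I haven't yet confirmed which user the app runs as — `cloudron exec --app <location> -- id` should show it):
```sh
sudo mkdir -p /media/ollama-vol
sudo chown 1000:1000 /media/ollama-vol   # assumed app UID/GID; verify first
sudo chmod 775 /media/ollama-vol
```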
-
Ollama: permissions issue when using volume storage
@joseph I was only trying to run ollama manually to understand why it isn't starting on its own when the app boots.
So, when I reboot the app with the `/media/ollama-vol` directory at 777 permissions, here is what I get:
```
Apr 15 08:31:05 2025-04-15 14:31:05,744 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
Apr 15 08:31:05 2025-04-15 14:31:05,744 INFO Included extra file "/etc/supervisor/conf.d/ollama.conf" during parsing
Apr 15 08:31:05 2025-04-15 14:31:05,744 INFO Included extra file "/etc/supervisor/conf.d/openwebui.conf" during parsing
Apr 15 08:31:05 2025-04-15 14:31:05,748 INFO RPC interface 'supervisor' initialized
Apr 15 08:31:05 2025-04-15 14:31:05,748 CRIT Server 'unix_http_server' running without any HTTP authentication checking
Apr 15 08:31:05 2025-04-15 14:31:05,749 INFO supervisord started with pid 1
Apr 15 08:31:06 2025-04-15 14:31:06,752 INFO spawned: 'ollama' with pid 25
Apr 15 08:31:06 2025-04-15 14:31:06,755 INFO spawned: 'openwebui' with pid 26
Apr 15 08:31:06 2025/04/15 14:31:06 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/media/ollama-vol/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Apr 15 08:31:06 Error: mkdir /media/ollama-vol/models/blobs: permission denied
Apr 15 08:31:06 2025-04-15 14:31:06,790 WARN exited: ollama (exit status 1; not expected)
Apr 15 08:31:07 2025-04-15 14:31:07,792 INFO spawned: 'ollama' with pid 33
Apr 15 08:31:07 2025-04-15 14:31:07,793 INFO success: openwebui entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Apr 15 08:31:07 2025/04/15 14:31:07 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/media/ollama-vol/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Apr 15 08:31:07 Error: mkdir /media/ollama-vol/models/blobs: permission denied
Apr 15 08:31:07 2025-04-15 14:31:07,823 WARN exited: ollama (exit status 1; not expected)
Apr 15 08:31:09 2025-04-15 14:31:09,828 INFO spawned: 'ollama' with pid 43
Apr 15 08:31:09 2025/04/15 14:31:09 routes.go:1231: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/media/ollama-vol/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
Apr 15 08:31:09 Error: mkdir /media/ollama-vol/models/blobs: permission denied
Apr 15 08:31:09 2025-04-15 14:31:09,862 WARN exited: ollama (exit status 1; not expected)
Apr 15 08:31:10 => Healtheck error: Error: connect ECONNREFUSED 172.18.16.191:8080
```
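One thing I plan to check, since the error is about `models/blobs` rather than the volume root: if the `models/` subdirectory was created by an earlier run under a different user, it could stay unwritable even with the parent at 777. A sketch (the UID/GID is my assumption; verify the app's actual user first):
```sh
# Who owns the volume root vs. the models/ subdirectory?
stat -c '%U:%G %a %n' /media/ollama-vol /media/ollama-vol/models
# If models/ is owned by root or another user, hand it to the app user:
sudo chown -R 1000:1000 /media/ollama-vol/models   # assumed UID/GID
```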
-
How to Import / synchronize a group of Cloudron's users to Nextcloud ?
Amazing—this is a huge help, @andreasdueren! It really improves the user experience. I'd love to see this kind of direct login option on more Cloudron apps, without the confusion of multiple options. Maybe this should be added to the official docs?
-
Ollama: permissions issue when using volume storage
@joseph Yep, many times! Any other ideas?
-
Ollama: permissions issue when using volume storage
@BrutalBirdie Thanks for sharing your experience—I'm using just the Filesystem mode in hopes that it would be most compatible.
@joseph Shouldn't the app automatically run ollama rather than requiring me to do it? And why wouldn't it work on the filesystem volume even when the directory is set to `chmod 777`? Thanks all!
-
Ollama: permissions issue when using volume storage
I've got OpenWebUI running successfully, but I can't get the local Ollama server running right. I have been able to connect to the Anthropic API, so the problem is not with the interface in general.
In OpenWebUI's Connections settings, when I try to add a model via Ollama (http://127.0.0.1:11434), I get "Failed to fetch models" and "Server Connection Error."
Here's some output I'm getting on the server:
```
# ps -af | grep -w "ollama"
root  242  108  0 03:21 pts/1  00:00:00 grep --color=auto -w ollama
# ollama ls
Error: could not connect to ollama app, is it running?
# ollama serve
Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
Error: could not create directory mkdir /root/.ollama: read-only file system
```
There seems to be a filesystem permissions issue here. The problem does not occur under the default app settings. But it does when I use the recommended path of creating a separate volume for the models.
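As a quicker liveness probe from the same web terminal (a sketch; `/api/version` is a standard Ollama endpoint):
```sh
# Is anything listening on the Ollama port at all?
curl -s http://127.0.0.1:11434/api/version || echo "ollama is not listening"
```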
As instructed here, I created a volume in the Filesystem at `/media/ollama-vol`. It is set up under the Storage settings for the app with "Read and Write" permissions. I have tried both `chmod 775` and `chmod 777` on that directory. In either case, when I start up the app, a `models/` directory is created in there. As instructed, I have added this line to `env.sh`:
```
export OLLAMA_MODELS=/media/ollama-vol/models
```
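To rule out the mount itself, I can test writability from inside the app's container (a sketch; the app location is a placeholder, and I'm assuming the volume appears at the same path inside the container as on the host):
```sh
cloudron exec --app openwebui.example.com   # opens a shell in the container
# then, inside the container:
id                                          # note the uid/gid the app runs as
touch /media/ollama-vol/models/.writetest && rm /media/ollama-vol/models/.writetest
```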
Have others succeeded in getting this working by following these steps?
Thanks in advance for any help you can offer.
-
Best practices or guide for Nextcloud 5.0.4?
On that point, I am seeing an issue: despite my account being an admin, after the update I am getting this message for most accounts on Nextcloud under the Accounts preferences: "You do not have permissions to see the details of this account." (For some reason, the admin account and one additional user account, the most recently created one, are editable.) In particular, this means I am unable to add or remove accounts from groups, which is a major problem.
Does anyone have ideas about what could be causing this?
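In case it helps with diagnosis, I can dump group membership with occ from the app's web terminal (a sketch; the exact occ invocation and path are my assumptions about the Cloudron package layout, so adjust as needed):
```sh
# Check that the admin user is still in the 'admin' group:
sudo -u www-data php /app/code/occ group:list    # assumed occ location
sudo -u www-data php /app/code/occ user:info admin
```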
-
Best practices or guide for Nextcloud 5.0.4?
Thanks all! It seems to have worked smoothly.
On this and other apps where Cloudron login is required, is there a way to remove the app's native login fields to avoid confusing users?
-
Best practices or guide for Nextcloud 5.0.4?
Nextcloud is an absolutely mission-critical tool for my lab, and I'm seeing that the new update is a serious one: I've noticed a number of discussions in this forum suggesting people are having trouble with it. I'm also not sure what a "manual update" consists of, or what kinds of problems I might expect.
Is there a current guide to this migration I might refer to? Based on what I'm seeing so far, I'm not sure that I am ready to safely proceed.
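For what it's worth, my tentative plan before attempting it (a sketch using the Cloudron CLI; the app location is a placeholder):
```sh
cloudron backup create --app cloud.example.com   # fresh backup first
cloudron update --app cloud.example.com          # trigger the package update
cloudron logs -f --app cloud.example.com         # watch for trouble
```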
Thanks!
-
Listmonk not sending to full list
I have sadly been planning to use AWS because (a) I also need storage for backups and it would be nice to have only one more bill to pay, and (b) my university has a contract.
-
Listmonk not sending to full list
Yeah. I wonder if it is being caused by Hostinger, the new VPS service I'm using. I'm trying to move email sending from the built-in SMTP server to an external service. Do you think that could help?
-
Listmonk not sending to full list
@nebulon Really? It seems to me that Cloudron is very much involved, since a change in Cloudron changed the app's behavior, and the app is trying to send from a different address than the one specified in Cloudron's management.
Could you say more about your reasoning?
-
Listmonk not sending to full list
After using Listmonk successfully for a while now, I am running into a strange problem this morning. Two things happened:
Sending authentication error. I have the app set (in the Cloudron backend, under app > Email) to send from newsletter@my.cloudronserver.example. But I was getting an error saying:
```
550 Authenticated user newsletter@my.cloudronserver.example cannot send mail as newsletter.app@my.cloudronserver.example
```
In the Cloudron backend, under the main email preferences, I set my.cloudronserver.example to allow masquerading. I tried resending the newsletter, and it seemed to work! But then...
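To confirm the masquerading change took effect, something like this should work from any machine (a sketch with swaks; the credentials would be the app's SMTP credentials from the Cloudron dashboard, and the recipient is a placeholder):
```sh
swaks --server my.cloudronserver.example:587 --tls \
      --auth-user newsletter.app@my.cloudronserver.example \
      --auth-password "$SMTP_PASS" \
      --from newsletter@my.cloudronserver.example \
      --to test@example.com
```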
Sending to only part of the list. After the above steps, I got a problem I've never seen before. Listmonk says the campaign I sent is "Finished," but only some of the emails were received. On the Campaigns page, it says:
Sent 501 / 1,311
I don't see any evidence of a problem in the logs, which say:
```
Jan 21 11:35:59 2025/01/21 18:35:59.167357 manager.go:413: start processing campaign (Start of 2025 (take 2))
Jan 21 11:36:14 2025/01/21 18:36:14.601113 pipe.go:217: campaign (Start of 2025 (take 2)) finished
```
Can you think of any reason why not all of the messages would be sent? Why it would stop at 501 and consider the process finished? What can I do to send the rest of the messages?
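If it helps narrow this down, I may inspect the campaign row directly in Postgres (a sketch; the column names are from my recollection of Listmonk's schema, so they need verifying, and `CLOUDRON_POSTGRESQL_URL` is the env var Cloudron sets inside the app container):
```sh
# How far did the campaign get through the subscriber list?
psql "$CLOUDRON_POSTGRESQL_URL" -c \
  "SELECT id, status, to_send, sent, last_subscriber_id, max_subscriber_id
   FROM campaigns WHERE name LIKE 'Start of 2025%';"
```
If `last_subscriber_id` is well below `max_subscriber_id`, the run really did stop partway through the list despite the "Finished" status.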