The main issue with ollama and GPU access is that NVIDIA requires a patched Docker runtime, while we rely on the official Ubuntu docker packages. We have to revisit this from time to time though, as the space is evolving fast.
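For context, this is roughly what NVIDIA GPU access looks like when the NVIDIA Container Toolkit is installed alongside Docker — a sketch only, since this is exactly the non-stock Docker setup described above, and the volume path and port are taken from ollama's published Docker image conventions:

```shell
# Sketch: assumes the NVIDIA Container Toolkit is installed and registered
# as a Docker runtime (the "patched Docker" situation mentioned above).
docker run -d --gpus all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```

The `--gpus all` flag is what fails on a stock Ubuntu Docker package without the toolkit, which is the core of the conflict.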
Also, we mostly looked into NVIDIA so far. For AMD and Intel this might just work already by forwarding the corresponding /dev/dri device into the container, as those do not need a patched Docker. The struggle here is that we do not have access to such devices to test any of this; most VPS providers only have NVIDIA chips for rent.
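The /dev/dri forwarding idea would look something like the following — an untested sketch, as noted, and the `rocm` image tag is ollama's AMD variant (Intel support would differ):

```shell
# Untested sketch: forward the host's DRI render devices into the container.
# No patched Docker runtime is needed for this, only the --device flag.
docker run -d \
  --device /dev/dri:/dev/dri \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```

Depending on the distribution, the container process may additionally need membership in the host's `video` or `render` group to open those device nodes.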
The alternative of running ollama on the host system should actually work fine. However, you then have to manage ollama locally on your own: Cloudron cannot update it, you are responsible for the system updates yourself, and a Cloudron platform update might break your setup, since we cannot test for such configurations either. But if this is not a production server, that is probably fine.
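For completeness, a host-side install is typically done with ollama's official install script — shown here as a sketch; as said above, anything installed this way sits outside what Cloudron manages or tests:

```shell
# Install ollama directly on the host (outside Cloudron's control).
# Review the script before piping it to sh.
curl -fsSL https://ollama.com/install.sh | sh

# On Linux the installer registers a systemd service; check it with:
systemctl status ollama
```

From then on, updates mean re-running the installer or your package workflow yourself, which is the maintenance burden described above.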