OK, I managed to get the application working again from a backup.
The issue appears as soon as I connect to phpmyadmin.
I don't really understand why and what happens, but as long as I stay away from phpmyadmin, everything works fine.
Posts made by CarbonBee
-
RE: Mysql ssl error and auto-increment not working
-
RE: Mysql ssl error and auto-increment not working
@girish @msbt Thanks! This helped greatly.
But I still have my increment issue.
In the CLI, I can run `ANALYZE TABLE <my_table>` after an insert and it updates the `information_schema` fields (`TABLE_ROWS`, `AUTO_INCREMENT`, ...).
But when I insert again, nothing changes; I always have to run `ANALYZE TABLE` again to refresh the info.
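For what it's worth, this matches MySQL 8.0's cached `information_schema` statistics: by default they are only refreshed every `information_schema_stats_expiry` seconds (86400, i.e. 24 h), and `ANALYZE TABLE` forces a refresh. A hedged check, assuming the newer Cloudron ships MySQL 8 (connection options omitted):

```shell
# Check the statistics cache TTL (MySQL 8.0 default is 86400 s = 24 h)...
mysql -e "SHOW VARIABLES LIKE 'information_schema_stats_expiry'"

# ...and disable the cache for this session, so TABLE_ROWS / AUTO_INCREMENT
# are read live instead of from the cached dictionary statistics.
mysql -e "SET SESSION information_schema_stats_expiry = 0;
          SELECT TABLE_ROWS, AUTO_INCREMENT
          FROM information_schema.TABLES
          WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'my_table';"
```

If that variable exists and is at its default, the caching (rather than a broken insert) would explain the behaviour; it also means a trigger should not rely on `information_schema` values being fresh.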
Have you ever encountered this? -
RE: PostGreSql Database won't start
Hi,
I have the exact same problem after updating my Cloudron to v6.2.8.
It's been looping for more than an hour now.
I tried to restart Cloudron and even reboot my server.
How can I solve this problem?
Thanks.
-
Mysql ssl error and auto-increment not working
Hi,
I have a strange behaviour with MySQL in a LAMP container.
Here is the background:
- I want to run this project https://partkeepr.org/ on my 2 servers
- It requires PHP <= 7.1
- I use LAMP v1.4.0 https://git.cloudron.io/cloudron/lamp-app/-/tags/v1.4.0
On one of my servers (Cloudron v6.0.1) it works fine.
But on the other one (Cloudron v6.2.8), I have several problems:
- In the terminal, when I try to connect to the mysql CLI, I get the error `ERROR 2026 (HY000): SSL connection error: unknown error number`, so I have to use phpmyadmin to manipulate the database.
- When I insert a row in a table, the new id (an AUTO_INCREMENT field) is correctly incremented, but in the `information_schema` fields, neither `table_rows` nor `auto_increment` etc. are updated. I have a trigger on before insert which needs the `auto_increment` value.
I have never seen such a behaviour and don't really know what to do.
Does anyone have any idea?
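For the SSL error specifically, a common workaround while debugging (not Cloudron-specific, and assuming a MySQL 5.7+ client) is to disable TLS for the CLI connection. The `MYSQL_*` variable names below are assumptions; substitute whatever your LAMP container actually exposes:

```shell
# Try connecting with TLS explicitly disabled; if this works, the
# "ERROR 2026" is a client/server TLS negotiation mismatch, not a
# credentials problem.
mysql --ssl-mode=DISABLED \
      -h "${MYSQL_HOST}" -P "${MYSQL_PORT}" \
      -u "${MYSQL_USERNAME}" -p"${MYSQL_PASSWORD}" "${MYSQL_DATABASE}"
```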
-
GitLab 1.43.0 Not Responding - key must be 32 bytes or longer
Hi,
I have a major issue on both my GitLab servers. I just updated them to 1.43.0 and both are not responding.
Here is the full log:
Dec 02 08:07:22 ==> Enabling LDAP integration
Dec 02 08:07:22 ==> Reusing existing host keys
Dec 02 08:07:22 ==> Fixing secrets
Dec 02 08:07:22 ==> Initializing gitlab shell
Dec 02 08:07:22 ==> Copying schema file
Dec 02 08:07:22 ==> Setting up tmp
Dec 02 08:07:23 ==> Fixing permissions
Dec 02 08:07:23 ==> Creating pg_trgm extension
Dec 02 08:07:23 NOTICE: extension "pg_trgm" already exists, skipping
Dec 02 08:07:23 ==> Upgrading existing db
Dec 02 08:07:23 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
Dec 02 08:07:23 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
Dec 02 08:07:23 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:29: warning: constant Gem::ConfigMap is deprecated
Dec 02 08:07:23 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:30: warning: constant Gem::ConfigMap is deprecated
Dec 02 08:07:23 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
Dec 02 08:07:23 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated
Dec 02 08:07:24 `/home/git` is not writable.
Dec 02 08:07:24 Bundler will use `/tmp/bundler20201202-44-14wdz8y44' as your home directory temporarily.
Dec 02 08:07:24 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:24 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:24 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:28 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:32 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:34 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:36 /usr/lib/ruby/vendor_ruby/rubygems/defaults/operating_system.rb:10: warning: constant Gem::ConfigMap is deprecated Dec 02 08:07:37 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/acts-as-taggable-on-6.5.0/lib/acts_as_taggable_on/tagging.rb:9: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call Dec 02 08:07:37 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/associations.rb:1657: warning: The called method `belongs_to' is defined here Dec 02 08:07:37 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/marginalia-1.9.0/lib/marginalia.rb:94: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call Dec 02 08:07:37 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/postgresql_adapter.rb:648: warning: The called method `execute_and_clear_without_marginalia' is defined here Dec 02 08:07:43 == 20201008013434 GenerateCiJwtSigningKey: migrating ========================== Dec 02 08:07:43 rake aborted! 
Dec 02 08:07:43 StandardError: An error has occurred, this and all later migrations canceled: Dec 02 08:07:43 Dec 02 08:07:43 key must be 32 bytes or longer Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/encryptor-3.0.0/lib/encryptor.rb:60:in `crypt' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/encryptor-3.0.0/lib/encryptor.rb:36:in `encrypt' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/attr_encrypted-3.1.0/lib/attr_encrypted.rb:266:in `encrypt' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/attr_encrypted-3.1.0/lib/attr_encrypted.rb:350:in `encrypt' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/attr_encrypted-3.1.0/lib/attr_encrypted.rb:165:in `block (2 levels) in attr_encrypted' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/attr_encrypted-3.1.0/lib/attr_encrypted/adapters/active_record.rb:77:in `block in attr_encrypted' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activemodel-6.0.3.3/lib/active_model/attribute_assignment.rb:50:in `public_send' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activemodel-6.0.3.3/lib/active_model/attribute_assignment.rb:50:in `_assign_attribute' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activemodel-6.0.3.3/lib/active_model/attribute_assignment.rb:43:in `block in _assign_attributes' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activemodel-6.0.3.3/lib/active_model/attribute_assignment.rb:42:in `each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activemodel-6.0.3.3/lib/active_model/attribute_assignment.rb:42:in `_assign_attributes' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/attribute_assignment.rb:21:in `_assign_attributes' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activemodel-6.0.3.3/lib/active_model/attribute_assignment.rb:35:in `assign_attributes' Dec 02 08:07:43 
/home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/attr_encrypted-3.1.0/lib/attr_encrypted/adapters/active_record.rb:29:in `perform_attribute_assignment' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/attr_encrypted-3.1.0/lib/attr_encrypted/adapters/active_record.rb:36:in `assign_attributes' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/persistence.rb:620:in `block in update' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/transactions.rb:375:in `block in with_transaction_returning_status' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/database_statements.rb:278:in `transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/transactions.rb:212:in `transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/transactions.rb:366:in `with_transaction_returning_status' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/persistence.rb:619:in `update' Dec 02 08:07:43 /home/git/gitlab/db/migrate/20201008013434_generate_ci_jwt_signing_key.rb:21:in `block in up' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:70:in `block (2 levels) in find_each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:70:in `each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:70:in `block in find_each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:136:in `block in find_in_batches' Dec 02 08:07:43 
/home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:238:in `block in in_batches' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:222:in `loop' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:222:in `in_batches' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:135:in `find_in_batches' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/relation/batches.rb:69:in `find_each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/querying.rb:21:in `find_each' Dec 02 08:07:43 /home/git/gitlab/db/migrate/20201008013434_generate_ci_jwt_signing_key.rb:20:in `up' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:831:in `exec_migration' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:812:in `block (2 levels) in migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:811:in `block in migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/connection_pool.rb:471:in `with_connection' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:810:in `migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1002:in `migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1310:in `block in execute_migration_in_transaction' Dec 02 08:07:43 
/home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1361:in `block in ddl_transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `block in transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.3/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.3/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.3/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.3/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activesupport-6.0.3.3/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/transactions.rb:212:in `transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1361:in 
`ddl_transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1309:in `execute_migration_in_transaction' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1281:in `block in migrate_without_lock' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1280:in `each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1280:in `migrate_without_lock' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1229:in `block in migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1382:in `with_advisory_lock' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1229:in `migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1061:in `up' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/migration.rb:1036:in `migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/tasks/database_tasks.rb:238:in `migrate' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/railties/databases.rake:86:in `block (3 levels) in <top (required)>' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/railties/databases.rake:84:in `each' Dec 02 08:07:43 /home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/activerecord-6.0.3.3/lib/active_record/railties/databases.rake:84:in `block (2 levels) in <top (required)>' Dec 02 08:07:43 
/home/git/gitlab/vendor/bundle/ruby/2.7.0/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'
I'm filing a support ticket right now, but if anyone has an idea on that issue...
-
Nextcloud 20.0.2 not responding, key is duplicated
Hi,
I just updated my NextCloud to 20.0.2, and it's not responding.
The logs say:
Nov 30 08:17:29 Check primary keys.
Nov 30 08:17:29 Adding primary key to the filecache_extended table, this can take some time...
Nov 30 08:17:29 In AbstractPostgreSQLDriver.php line 51:
Nov 30 08:17:29 An exception occurred while executing 'ALTER TABLE oc_filecache_extended ADD PRIMARY KEY (fileid)':
Nov 30 08:17:29 SQLSTATE[23505]: Unique violation: 7 ERROR: could not create unique index "oc_filecache_extended_pkey"
Nov 30 08:17:29 DETAIL: Key (fileid)=(169141) is duplicated.
Nov 30 08:17:29 In PDOConnection.php line 83:
Nov 30 08:17:29 SQLSTATE[23505]: Unique violation: 7 ERROR: could not create unique index "oc_filecache_extended_pkey"
Nov 30 08:17:29 DETAIL: Key (fileid)=(169141) is duplicated.
Nov 30 08:17:29 In PDOConnection.php line 78:
Nov 30 08:17:29 SQLSTATE[23505]: Unique violation: 7 ERROR: could not create unique index "oc_filecache_extended_pkey"
Nov 30 08:17:29 DETAIL: Key (fileid)=(169141) is duplicated.
Nov 30 08:17:29 db:add-missing-primary-keys
And it's restarting every minute.
Could someone help me with that?
Thanks,
-
RE: Cloudron update exited with code 1 and no space left in /boot
@msbt Well, any call to `apt-get` tells me:
You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: linux-image-generic
Which of course I can't do because of the lack of space in `/boot`.
I followed parts of this link:
dpkg --list 'linux-image*' | awk '{ if($1=="ii") print $2}' | grep -v `uname -r`
rm -rf /boot/*-4.4.0-{161,164,178,179,184,185,186,187}-*
apt-get -f install
apt-get autoremove
And it worked fine!
-
Cloudron update exited with code 1 and no space left in /boot
Hi,
Cloudron updates have been failing for some days, and the message is
update exited with code 1 signal null
I went to
journalctl -u cloudron-updater
and found:
Nov 15 21:23:43 my.domain.com installer.sh[354]: dpkg: dependency problems prevent configuration of linux-image-generic:
Nov 15 21:23:43 my.domain.com installer.sh[354]: linux-image-generic depends on linux-image-4.4.0-190-generic | linux-image-unsigned-4.4.0-190-generic; however:
Nov 15 21:23:43 my.domain.com installer.sh[354]: Package linux-image-4.4.0-190-generic is not installed.
Nov 15 21:23:43 my.domain.com installer.sh[354]: Package linux-image-unsigned-4.4.0-190-generic is not installed.
Nov 15 21:23:43 my.domain.com installer.sh[354]: linux-image-generic depends on linux-modules-extra-4.4.0-190-generic; however:
Nov 15 21:23:43 my.domain.com installer.sh[354]: Package linux-modules-extra-4.4.0-190-generic is not configured yet.
Nov 15 21:23:43 my.domain.com installer.sh[354]: dpkg: error processing package linux-image-generic (--configure):
Nov 15 21:23:43 my.domain.com installer.sh[354]: dependency problems - leaving unconfigured
Nov 15 21:23:43 my.domain.com installer.sh[354]: Processing triggers for linux-image-4.4.0-189-generic (4.4.0-189.219) ...
Nov 15 21:23:44 my.domain.com installer.sh[354]: /etc/kernel/postinst.d/initramfs-tools:
Nov 15 21:23:44 my.domain.com installer.sh[354]: update-initramfs: Generating /boot/initrd.img-4.4.0-189-generic
Nov 15 21:23:49 my.domain.com installer.sh[354]: cat: write error: No space left on device
Nov 15 21:23:49 my.domain.com installer.sh[354]: update-initramfs: failed for /boot/initrd.img-4.4.0-189-generic with 1.
Nov 15 21:23:49 my.domain.com installer.sh[354]: run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
Nov 15 21:23:49 my.domain.com installer.sh[354]: dpkg: error processing package linux-image-4.4.0-189-generic (--configure):
Nov 15 21:23:49 my.domain.com installer.sh[354]: subprocess installed post-installation script returned error exit status 1
Nov 15 21:23:49 my.domain.com installer.sh[354]: Errors were encountered while processing:
Nov 15 21:23:49 my.domain.com installer.sh[354]: linux-modules-extra-4.4.0-190-generic
Nov 15 21:23:49 my.domain.com installer.sh[354]: linux-image-generic
Nov 15 21:23:49 my.domain.com installer.sh[354]: linux-image-4.4.0-189-generic
Nov 15 21:23:49 my.domain.com installer.sh[354]: ==> installer: Failed to fix packages. Retry
I saw the line `No space left on device`, and indeed `df -h` says:
/dev/md2 487M 482M 0 100% /boot
What is weird is that `uname -nar` says 4.4.0-154-generic, but in `/boot` I have:
initrd.img-4.4.0-104-generic
initrd.img-4.4.0-112-generic
initrd.img-4.4.0-154-generic
initrd.img-4.4.0-161-generic
initrd.img-4.4.0-164-generic
initrd.img-4.4.0-178-generic
initrd.img-4.4.0-179-generic
initrd.img-4.4.0-184-generic
initrd.img-4.4.0-185-generic
initrd.img-4.4.0-186-generic
initrd.img-4.4.0-187-generic
initrd.img-4.4.0-189-generic.new
Finally, I tried `dpkg --configure -a` and my `dpkg` is locked: `dpkg: error: dpkg frontend is locked by another process`. Using `lsof` and `ps`, I found it is used by:
root 1213 0.0 0.0 65512 3000 ? Ss 2019 37:23 /usr/sbin/sshd -D
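Before killing anything, it may be worth double-checking which process actually holds the lock; pointing `lsof` at the lock files themselves is more direct than scanning `ps` (a sketch; lock file paths vary slightly between Ubuntu releases):

```shell
# Show the PID(s) holding the dpkg locks. An sshd holding them would be
# very surprising; a stale apt/dpkg child process would not be.
lsof /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock 2>/dev/null
```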
So, what should I do? Remove every file that is not 4.4.0-154-generic? Can I kill this sshd process that is locking `dpkg`? -
RE: Backups are not removed from aws after retention passed
@girish Well, this issue still happens, for both our servers. I just went to my AWS console to remove a backup made on the 5th which no longer existed in the Cloudron dashboard (I have a retention of 1 week).
-
RE: Backups are not removed from aws after retention passed
@nebulon Well, as these are production servers, I'd rather not remove our daily backups.
-
RE: Backups are not removed from aws after retention passed
@nebulon It's a tarball backup.
-
Backups are not removed from aws after retention passed
Hi,
I just went to my AWS dashboard and I was surprised by the size taken by my Cloudron backups. I told Cloudron to make a backup every day with a retention of one week.
After further investigation, all my backups still appear in AWS, although only the last seven days appear in Cloudron.
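A hypothetical way to cross-check this from the CLI (the bucket name below is a placeholder, not Cloudron's actual layout):

```shell
# List every object in the backup bucket, oldest first, to compare
# against what the Cloudron dashboard shows: anything older than the
# retention window that still appears here was not pruned.
aws s3api list-objects-v2 \
  --bucket my-cloudron-backups \
  --query 'sort_by(Contents, &LastModified)[].[LastModified, Key]' \
  --output table
```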
It seems that Cloudron just "forgets" about backups past retention, but does not remove them from AWS. -
RE: Aws Backup error : EPIPE HTTP Code : NetworkingError
Well, a technician from Kimsufi just told me that there is no upload limit on my server. I don't really understand what's going on, because more or less the same setup on a different server works fine.
Is there a way to track it down and find out where the issue comes from? -
RE: Aws Backup error : EPIPE HTTP Code : NetworkingError
@girish I tried to back up only the Nextcloud app, which generated the same error.
I don't know of any upload limitation. My server is hosted by Kimsufi, an OVH discounter. -
RE: Aws Backup error : EPIPE HTTP Code : NetworkingError
It's always doing that. I haven't been able to make a full backup for days. This is becoming critical.
-
Aws Backup error : EPIPE HTTP Code : NetworkingError
Hi!
I recently set up my Cloudron server to back up to AWS S3 every day. These last few days, the backup has failed while trying to upload my Nextcloud backup (which weighs more than 100GB).
The logs report the following (for security purposes, I removed the domain name, the application ID and the AWS ID):
Oct 20 03:29:26 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 392M@1MBps (cloud.mydomain.fr)"}
Oct 20 03:29:37 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 104M@0MBps (cloud.mydomain.fr)"}
Oct 20 03:30:41 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 392M@1MBps (cloud.mydomain.fr)"}
Oct 20 03:30:55 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 1404M@2MBps (cloud.mydomain.fr)"}
Oct 20 03:34:06 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 104M@0MBps (cloud.mydomain.fr)"}
Oct 20 03:34:15 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 877M@1MBps (cloud.mydomain.fr)"}
Oct 20 03:34:37 box:tasks 5308: {"percent":87.66666666666667,"message":"Uploading backup 104M@0MBps (cloud.mydomain.fr)"}
Oct 20 03:37:31 box:shell backup-snapshot/app_<appId> (stdout): 2020-10-20T01:37:18.907Z box:storage/s3 Error uploading [snapshot/app_<appId>.tar.gz.enc]: s3 upload error. { Error: write EPIPE at WriteWrap.afterWrite [as oncomplete] (net.js:789:14) message: 'write EPIPE', errno: 'EPIPE', code: 'NetworkingError', syscall: 'write', region: 'eu-west-3', hostname: '<s3Id>.s3.eu-west-3.amazonaws.com', retryable: true, time: 2020-10-20T01:36:48.195Z }
Oct 20 03:37:19 box:backupupload upload completed. error: { BoxError: Error uploading snapshot/app_<appId>.tar.gz.enc.
Message: write EPIPE HTTP Code: NetworkingError at ManagedUpload.callback (/home/yellowtent/box/src/storage/s3.js:130:33) at ManagedUpload.cleanup (/home/yellowtent/box/node_modules/aws-sdk/lib/s3/managed_upload.js:629:10) at Response.<anonymous> (/home/yellowtent/box/node_modules/aws-sdk/lib/s3/managed_upload.js:566:28) at Request.<anonymous> (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:364:18) at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:106:20) at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:78:10) at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:683:14) at Request.transition (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:22:10) at AcceptorStateMachine.runTo (/home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:14:12) at /home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:26:10 name: 'BoxError', reason: 'External Error', details: {}, message: 'Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError' } Oct 20 03:37:41 box:backups runBackupUpload: result - {"result":"Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError"} Oct 20 03:39:26 box:shell backup-snapshot/app_<appId> code: 50, signal: null Oct 20 03:39:30 box:backups cloud.mydomain.fr Unable to backup { BoxError: Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError at /home/yellowtent/box/src/backups.js:863:29 at f (/home/yellowtent/box/node_modules/once/once.js:25:25) at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:69:9) at ChildProcess.emit (events.js:198:13) at ChildProcess.EventEmitter.emit (domain.js:448:20) at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12) name: 'BoxError', reason: 'External Error', details: {}, message: 'Error uploading snapshot/app_<appId>.tar.gz.enc. 
Message: write EPIPE HTTP Code: NetworkingError' } Oct 20 03:39:31 box:taskworker Task took 16766.54 seconds Oct 20 03:39:31 box:tasks setCompleted - 5308: {"result":null,"error":{"stack":"BoxError: Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError\n at /home/yellowtent/box/src/backups.js:863:29\n at f (/home/yellowtent/box/node_modules/once/once.js:25:25)\n at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:69:9)\n at ChildProcess.emit (events.js:198:13)\n at ChildProcess.EventEmitter.emit (domain.js:448:20)\n at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)","name":"BoxError","reason":"External Error","details":{},"message":"Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError"}} Oct 20 03:39:32 box:tasks 5308: {"percent":100,"result":null,"error":{"stack":"BoxError: Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError\n at /home/yellowtent/box/src/backups.js:863:29\n at f (/home/yellowtent/box/node_modules/once/once.js:25:25)\n at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:69:9)\n at ChildProcess.emit (events.js:198:13)\n at ChildProcess.EventEmitter.emit (domain.js:448:20)\n at Process.ChildProcess._handle.onexit (internal/child_process.js:248:12)","name":"BoxError","reason":"External Error","details":{},"message":"Error uploading snapshot/app_<appId>.tar.gz.enc. Message: write EPIPE HTTP Code: NetworkingError"}}
I tried with 10MB and 100MB upload part sizes, but this does not seem to change anything.
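Part size is at least easy to rule out: S3 multipart uploads are capped at 10,000 parts, so the minimum part size for a given archive is simple arithmetic (the 10,000-part cap is S3's documented limit; the rest is a sketch):

```shell
# Minimum part size (in MB) so that a given archive fits within S3's
# 10000-part multipart-upload limit (integer division, rounded up).
archive_mb=$((100 * 1024))                      # a 100 GB archive
min_part_mb=$(( (archive_mb + 9999) / 10000 ))  # ceil(102400 / 10000)
echo "$min_part_mb"                             # prints 11
```

So 10 MB parts are borderline for a 100 GB archive, while 100 MB parts are comfortably within limits, which is consistent with part size not being the culprit here.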
Do you have any idea what's going on?
Thanks!
-
RE: Jingo nginx installation problem
I patched the my.domainname.fr.conf with the missing part (taken from my other Cloudron server).
I then uninstalled and re-installed Jingo, and everything seems to work fine now. -
RE: Jingo nginx installation problem
Ok, it seems there are some issues in the nginx conf files.
The `application/my.domainname.fr.conf` ends abruptly with:
# only serve up the status page if we get proxy gateway errors root /home/yellowtent/box/das
And nginx tells me `invalid port in upstream "127.0.0.1:"` for the Jingo nginx conf.
Any idea what's going on?
-
Jingo nginx installation problem
Hi!
I'm trying to install Jingo on my Cloudron server, but I get the following error:
"Nginx Error: Error reloading nginx: reload exited with code 1 signal null"
I checked, but everything seems right with my nginx.
Do you have any idea why it's happening?
Thanks
-
Cloudron ID on subscription invoices
Hi!
On the subscription invoices, could it be possible to add the Cloudron ID, the domain name, or anything that can identify one server from another? We have 3 servers, thus 3 invoices every month, and we cannot tell which invoice goes with which server.
Thanks!
-
RE: Best aws s3 backup storage class
@robi That would be interesting, but it does not solve my current problem, which is: "How to back up Cloudron to S3 Glacier Deep Archive".
One solution would be to back up to regular S3 and then use an AWS lifecycle rule to transition the objects to Glacier. But then they would not be accessible through the Glacier API, only S3's, and the transition from S3 to Glacier would be costly.
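For reference, such a lifecycle rule is a small JSON document (a sketch for `aws s3api put-bucket-lifecycle-configuration`; the `snapshot/` prefix is a placeholder for wherever Cloudron actually writes its backups):

```json
{
  "Rules": [
    {
      "ID": "cloudron-backups-to-deep-archive",
      "Status": "Enabled",
      "Filter": { "Prefix": "snapshot/" },
      "Transitions": [
        { "Days": 1, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

The catch remains the one described above: once transitioned, objects need an explicit restore step before they can be downloaded, and Cloudron does not drive that.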
In the near future, could the Glacier API be integrated?
-
RE: Best aws s3 backup storage class
@marcusquinn said in Best aws s3 backup storage class:
How much are you storing?
Relevant question indeed, I missed it, sorry.
Well, my servers' backups weigh 145 GB and 190 GB respectively. I back them up daily and I would like to have at least one week of backup history.
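The headline number is easy to sanity-check (plain arithmetic on the figures above):

```shell
# 145 GB + 190 GB uploaded per daily backup, 7 days of history kept:
per_day_gb=$((145 + 190))
total_gb=$((per_day_gb * 7))
echo "$total_gb"   # prints 2345, i.e. roughly 2.3 TB
```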
That means a bit more than 2 TB. -
RE: Best aws s3 backup storage class
@marcusquinn These are the basic S3 plan prices. Glacier Deep Archive is $0.0018/GB: https://aws.amazon.com/fr/s3/pricing/.
Moreover, we will use this solution for other needs (which require AWS) and I would like to use the same service for all our applications, which includes Cloudron backups. -
RE: Best aws s3 backup storage class
@luckow I understand your point, which I find really interesting. However, my point of view is slightly different: I've been running Cloudron on 2 servers for more than 4 years now and I have never needed a backup restoration. The event is rare enough that I can accept it costing half a day or more if it happens. The second point is the price: with Glacier Deep Archive, a TB costs ~$1/month.
-
RE: Best aws s3 backup storage class
@nebulon Well, when I try to save my backup storage settings, I get the error "The specified bucket does not exist". So it seems that it is not that compatible.
Any chance to fix that?
-
RE: Best aws s3 backup storage class
Other questions linked to this:
- What is the difference, from the Cloudron point of view, between s3 buckets and glacier vaults?
- Does Cloudron support Glacier?
-
Best aws s3 backup storage class
Hi!
I'm new to AWS and I wonder what would be the best storage class for Cloudron backups? Would S3 Glacier Deep Archive be too slow?
Thanks for your advice.
-
RE: Use an external LDAP provider
I wasn't thinking of any specific LDAP server. It would be great to connect Cloudron to any external LDAP server that would manage groups and users. For example, connect one Cloudron server to another so that a single Cloudron server manages the users and groups for both.
-
RE: dolibarr - ERP & CRM for Business
As a company, we use a self-hosted Dolibarr on an external server. It would be really useful to have it directly in Cloudron!
-
RE: diagrams.net (was draw.io) - most popular open source diagram editor
Yes, this would be really great!
-
Automatically link CALDAV invitation response mail to nextcloud calendar
Hi,
When creating an event, a mail is sent from the Cloudron server with the author's address. Then, when an attendee answers from their client, a mail is sent to the author's Cloudron mailbox.
Is there a way to link this answer mail to the Nextcloud calendar so the attendee status is automatically updated?
Thanks!
-
Use an external LDAP provider
Hi,
I think there should be a way to use an external LDAP provider for Cloudron. It would allow Cloudron to be integrated more easily into a larger architecture. This links to another suggestion: making LDAP groups accessible.
Thanks!
-
Add more options for backup scheduling
Hi,
I think there should be more options for scheduling server backups. Instead of pre-defined delay options, let the user choose the hours/days for both the backup interval and the retention period, as well as the time of day backups should run.
Even better, add an option to delete old backups except for some.
For example, I would like to back up my server every day at 10pm and keep only:
- the backups of the 7 previous days
- the first day of each week of the current month
- the first day of each previous month
I know this is a complex case, but there should be a way to let the user define a custom backup plan.
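To make the idea concrete, the retention rule above could be sketched as a small shell function that decides whether a given backup date should be kept (a sketch only; it assumes GNU `date` and treats Monday as the first day of a week):

```shell
# keep_backup BACKUP_DATE TODAY  -> prints "keep" or "delete"
keep_backup() {
    d="$1"; today="$2"
    # age of the backup in whole days
    age=$(( ( $(date -d "$today" +%s) - $(date -d "$d" +%s) ) / 86400 ))
    dow=$(date -d "$d" +%u)   # day of week, 1 = Monday
    dom=$(date -d "$d" +%d)   # day of month, e.g. 01
    if [ "$age" -le 7 ]; then
        echo keep             # one of the 7 previous days
    elif [ "$(date -d "$d" +%Y-%m)" = "$(date -d "$today" +%Y-%m)" ]; then
        # current month: keep the first day of each week
        if [ "$dow" -eq 1 ]; then echo keep; else echo delete; fi
    else
        # previous months: keep only the first day of the month
        if [ "$dom" = "01" ]; then echo keep; else echo delete; fi
    fi
}
```

A backup tool could then iterate over existing backup dates and delete those for which the function prints "delete".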
Thanks!
-
Issues on server migration
Hi,
I've got an issue when restoring a Cloudron backup on another server. I want to clone my Cloudron server A onto another server B: it's what the doc calls "migration".
I did this to test a complete backup of my main server while keeping it alive.
What is going on:
- Backup of Cloudron A (running on Ubuntu 16.04)
- Installation of the same Cloudron version on the fresh server B (Ubuntu 18.04)
- Copying the Cloudron A backup to server B
- Setting up the DNS of Cloudron B
- Using "Looking to restore" and providing all the information
- Near the end, a message saying "Not found" is displayed
- Going to `my.<server B>`, the login prompt is displayed, but it redirects to `my.<server A>/login_callback.html?token=<??>&state=<??>`
- Subdomains of B are unreachable, returning 404 errors
- After rebooting, `my.<server B>/#/apps` is reachable
- Cloudron B has the same Cloudron ID as Cloudron A
- Subdomains are reachable, but all applications are in "Restoring (pending)" status
- When doing `cloudron inspect`:
  - `apiEndpoint` is `<server B>`
  - every `<app>.domain` is set to `<server A>` and every `<app>.fqdn` is `<app>.<server A>`
- Users on server B are correct, as well as groups
- When going to "Domains & Certs", an "Internal Error" message appears and the "Domains" section is empty
- When adding the domain `<server B>` with a wildcard DNS API provider, it still displays "Internal Error" and an empty domain list
- When repairing applications and setting their domain to `<server B>`, they are still in "error" status
- When changing all database entries where `<server A>` was mentioned to `<server B>`, they are still in "error" status
- Deleting all applications and then restoring them one by one from backup seemed to work, but not for long. After a few days, more than half of the applications were in "error" status.
One should be able to clone a Cloudron server easily from a backup, even if the DNS is different. And what are the issues of having two different Cloudron servers with the same ID?
I'm submitting a new issue for this.
-
Make Cloudron groups accessible on LDAP
Hello,
It would be great and really useful for many apps (Nextcloud, DokuWiki, Git...) to make the Cloudron groups accessible over LDAP.
Could it be possible?
Thanks!
-
RE: Webadmin error on manual update of self-hosting free-plan
@darkben Well, that's exactly what I did, but it ends with the OAuth error. I just fixed it (see my last answer).
-
RE: Webadmin error on manual update of self-hosting free-plan
Ok, thanks to a fellow Cloudron user on the chat (many thanks robi!), I just got my admin page working again. I copied the `webadmin` directory on my server to a `dashboard` directory at the same level, and yay! The OAuth error just disappeared!
RE: Webadmin error on manual update of self-hosting free-plan
@carbonbee What I don't understand is why, during the build on the server with the `hotfix` method, a `dashboard` directory is created instead of a `webadmin` one. Almost everything seems to rely on this `webadmin` directory.
RE: Webadmin error on manual update of self-hosting free-plan
@nebulon Well, if I understand correctly, either I update my Cloudron to the master branch, or I wait for the next release that will, I hope, correct this issue. Is there another option? Isn't there a way to fix it?
-
RE: Webadmin error on manual update of self-hosting free-plan
@nebulon Well, in the `createReleaseTarball` script, line 32, it searches for the `webadmin` directory. So I renamed my `dashboard` directory to `webadmin`.
I hotfix my Cloudron, and it gives me the error: `/home/yellowtent/box/setup/start.sh: line 218: /home/yellowtent/box/webadmin/dist/config.json: No such file or directory`
And now my Cloudron is down. I rename `box/dashboard` to `box/webadmin`, copy the old `webadmin/dist/config.json` into it, then call `box/setup/start.sh` again. But now my `config/cloudron.conf` is a bit empty, so I replace it with a backup. I restart with `systemctl restart box`, and at last my Cloudron is up again. I can access all my apps! But the Cloudron page itself is no longer available because of an `Unknown OAuth client` error.
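For reference, here is the rename-and-copy part of that workaround as a small sketch, wrapped in a function so it can be tried on a scratch copy first (paths are from my server; the location of the saved `config.json` is an assumption, and on the real server this would be followed by running `box/setup/start.sh` and `systemctl restart box`):

```shell
# fix_webadmin BOX_DIR OLD_CONFIG
# Renames BOX_DIR/dashboard to BOX_DIR/webadmin (which start.sh expects)
# and restores a previously saved config.json into webadmin/dist/.
fix_webadmin() {
    box="$1"          # e.g. /home/yellowtent/box
    old_config="$2"   # a saved copy of webadmin/dist/config.json
    [ -d "$box/dashboard" ] || return 1
    mv "$box/dashboard" "$box/webadmin"
    mkdir -p "$box/webadmin/dist"
    cp "$old_config" "$box/webadmin/dist/config.json"
}
```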
Such is the current state of my 2.0.1 Cloudron. -
Webadmin error on manual update of self-hosting free-plan
Hi everyone!
I've got a huge issue manually updating my free-plan self-hosted Cloudron from 1.11.0 to 2.0.0. I used the method documented on the git, plus the webadmin part: since v1.10 (as I recall), the box repo doesn't contain the webadmin part anymore, but looks for the directory `../webadmin`.
On v2.0.*, the `start.sh` script, at line 218, tries to create a JSON file in the `/home/yellowtent/box/webadmin/dist/` directory, but it does not exist! A `/home/yellowtent/box/dashboard/dist/` directory does exist (it might be related to the webadmin repo being renamed to dashboard, but despite my investigations in the code, I didn't find where it is created).
The installation fails because of this error, and after that I get a 404 error when trying to access the Cloudron, and I can't connect with the Cloudron CLI.
I tried to re-run the script after creating a `webadmin/dist` directory, but it gives me a 403 error (most certainly because I didn't pass the right parameters, so many of my `cloudron.conf` fields are empty). I still have access to the Cloudron CLI and to all my apps. I just don't have the Cloudron page anymore.
Do any of you have an idea how to fix this issue and correctly update to 2.0.1?
Thanks!
UPDATE:
I browsed the code of both box and webadmin for v2.0.0: `dashboard` is never used, but `webadmin` is. So I copied the generated `config.json` from `webadmin/dist` to `dashboard/config.json`, renamed `dashboard` to `webadmin`, and restarted the box service. The interface came back (yay!) but it gives me an "Unknown OAuth client" error. It may be because of my `config/cloudron.conf`. Or do I have to restart another service?