
Password and unix_socket authentication in MySQL and MariaDB. Error “#1698 - Access denied for user ‘root’@’localhost’” (SOLVED)

How to fix #1698 - Access denied for user ‘root’@’localhost’

In recent versions of MariaDB (and possibly MySQL), unix_socket authentication is enabled by default for the root account. If you are not familiar with it, you may well have run into the error “#1698 - Access denied for user ‘root’@’localhost’”.

Let's take a look at what unix_socket authentication is in MySQL and MariaDB, and how to fix error #1698.

MySQL and MariaDB unix_socket authentication

The essence of unix_socket authentication is that a user who is already logged into the system does not need to enter a password when connecting to the DBMS: their identity was already verified when they logged into the OS.

In practice, most people work under a regular OS account but connect to MySQL as root. As a result, the above error occurs.

You can choose one of the options:

1. Always use sudo when connecting as root.

2. Make changes to the MySQL settings so that ordinary users can connect to the DBMS.

3. Create a MySQL user with the same name as your system user name (an example follows below).
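For example, option 3 might look like this. This is a minimal sketch: the user name alice is hypothetical, and the GRANT statement giving full privileges is only an illustration - adjust it to what the user actually needs (the IDENTIFIED VIA unix_socket syntax is MariaDB's; in MySQL the plugin is called auth_socket and the syntax is IDENTIFIED WITH auth_socket):

sudo mysql
CREATE USER 'alice'@'localhost' IDENTIFIED VIA unix_socket;
GRANT ALL PRIVILEGES ON *.* TO 'alice'@'localhost';
FLUSH PRIVILEGES;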

How to check which authentication method is being used

To view the used authentication method, you can use the following SQL query:

select * from mysql.global_priv where User='root';

Or this, for greater clarity of the output:

SELECT CONCAT(user, '@', host, ' => ', JSON_DETAILED(priv)) FROM mysql.global_priv where user='root';

In the output you can see that both mysql_native_password and unix_socket are listed as plugins:

{
    "access": 18446744073709551615,
    "plugin": "mysql_native_password",
    "authentication_string": "invalid",
    "auth_or":
    [
        {
        },
        {
            "plugin": "unix_socket"
        }
    ]
}

With this configuration, only unix_socket authentication worked for me.
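Note that to run these queries in the first place you have to be connected to the server with sufficient privileges, for example from a root shell via unix_socket authentication. A typical invocation (assuming the standard mysql/mariadb command-line client) is:

sudo mysql -e "SELECT CONCAT(user, '@', host, ' => ', JSON_DETAILED(priv)) FROM mysql.global_priv WHERE user='root';"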

Enabling and disabling unix_socket authentication

You can switch to password authentication with the following SQL query:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'PASSWORD';

Replace PASSWORD with the actual password you want to set.

To switch to unix_socket authentication, execute the following SQL query:

ALTER USER 'root'@'localhost' IDENTIFIED VIA unix_socket;

Let's check:

SELECT plugin from mysql.user where User='root';

If mysql_native_password is output, it means that password login is being used.

In fact, unix_socket authentication can be combined with password authentication; I will not dwell on it beyond a single example.
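For reference, in MariaDB 10.4 and later such a combined configuration can be set with a single query of the following form (a sketch; PASSWORD is a placeholder, and you should check the documentation for your exact version):

ALTER USER 'root'@'localhost' IDENTIFIED VIA unix_socket OR mysql_native_password USING PASSWORD('PASSWORD');

With this setup, root can log in either through the socket (sudo mysql) or with the password (mysql -u root -p).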

Replacement for “update user set plugin='' where User='root';”

Previously, a similar effect - changing authentication from unix_socket to password authentication - was achieved using a sequence of commands:

Connecting to MySQL Server:

sudo mysql

At the MySQL prompt, you had to run the commands:

use mysql;
update user set plugin='' where User='root';
flush privileges;
exit

Then the service had to be restarted:

sudo systemctl restart mysql.service

And it was possible to connect without sudo.

mysql -u root -p

In the sequence shown above, the authentication method was likewise changed from unix_socket to password, but no new password was set. If you want the same effect (keep in mind it becomes insecure once unix_socket authentication is disabled), you can run the following queries, i.e. set an empty password:

use mysql;
ALTER USER 'root'@'localhost' IDENTIFIED BY '';
exit

Choosing an authentication method when creating a user

You can create a user with password authentication with an SQL query of the following form:

CREATE USER 'USERNAME'@'HOST' IDENTIFIED BY 'PASSWORD';

To create a user with unix_socket authentication, execute the following SQL query:

CREATE USER 'USERNAME'@'HOST' IDENTIFIED VIA unix_socket;
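As a usage example: if the created unix_socket user has the same name as your OS account, you can then connect without a password simply by running:

mysql

while a user created with password authentication connects in the usual way:

mysql -u USERNAME -p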

How to run “bundle install” as root

The “bundle install” command is not supposed to be run as root. For example, the following sequence of commands will fail:

gem update --system
xcode-select --install
gem install nokogiri
bundle install

The last command will show an error:

Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Could not locate Gemfile

This can be a real problem if the primary user on your computer is root. On some servers, the default user is root.

“bundle install” has no option to ignore the fact that it is being run with elevated privileges. But there is still a way around this problem: create a new user and run the command as that user.

To create a new user in Debian, Kali Linux, Linux Mint, Ubuntu, run a command like this:

sudo useradd -m -G sudo -s /bin/bash NEW_USER

To create a new user in Arch Linux, Manjaro, BlackArch and their derivatives, run a command like:

sudo useradd -m -g users -G wheel,video -s /bin/bash NEW_USER

After that, log in as the new user:

su - NEW_USER

And run bundle again:

bundle install

This time the command should complete successfully.

To return to the root user, that is, log out of the new user session, press Ctrl+d.
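Alternatively, if you prefer not to open an interactive session as the new user, you can run just the one command on their behalf. This is a sketch; /path/to/project is a placeholder for the directory that contains the Gemfile:

sudo -u NEW_USER -H bash -c 'cd /path/to/project && bundle install'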

Warning: apt-key is deprecated (SOLVED)

The apt-key command manages keys that are responsible for verifying the signature of application package repositories.

Now, whenever you use the apt-key command, you will receive the message:

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

It means that the apt-key program is deprecated and that key files should now be managed in trusted.gpg.d instead. In plain terms, we now have to put the key files ourselves into the /etc/apt/trusted.gpg.d/ directory.

This method will use the /etc/apt/trusted.gpg.d/ directory to store the public GPG key ring files. It has been available since early 2017.

If you look at the recommended man page (man apt-key), it says that this command and all its functions are deprecated.

There are two options for how you can proceed in this situation.

You can continue to use apt-key

Despite what the documentation says, the apt-key program still works as usual and performs all its functions.

Moreover, the apt-key command will not be removed for quite a long time, at least several years, and may never be removed at all for compatibility reasons.

Therefore, for now, you can simply ignore the warning “apt-key is deprecated”.

How to add keys in a new way

The new, “modern” method is poorly documented, so let's try to fill this gap.

Now the keys need to be added with the following commands.

To add a remote key file:

curl -s URL | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/NAME.gpg --import

To add a local key file:

cat FILE.pub | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/NAME.gpg --import

In these commands, you need to substitute:

  • URL - address of the .pub file
  • NAME - you can choose any file name
  • FILE - filename of the .pub file

Then be sure to run the following command to set the correct file permissions:

sudo chmod 644 /etc/apt/trusted.gpg.d/NAME.gpg

Example. If you already know the URL of the required public key, use wget or curl to download and import it. Remember to update the file permissions from 600 to 644.

curl -s https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/earth.gpg --import
sudo chmod 644 /etc/apt/trusted.gpg.d/earth.gpg
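Similarly, for a key that is already saved locally (the file name /tmp/example-key.pub and the keyring name example.gpg below are hypothetical):

cat /tmp/example-key.pub | sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/example.gpg --import
sudo chmod 644 /etc/apt/trusted.gpg.d/example.gpg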

Alternatively, you can get the key from the keyserver:

sudo gpg --no-default-keyring --keyring gnupg-ring:/etc/apt/trusted.gpg.d/rabbit.gpg --keyserver keyserver.ubuntu.com --recv 6B73A36E6026DFCA
sudo chmod 644 /etc/apt/trusted.gpg.d/rabbit.gpg

How to view information about installed keys

To view information about the installed key, run a command of the form:

gpg --list-keys --keyring /etc/apt/trusted.gpg.d/FILE.gpg

For instance:

gpg --list-keys --keyring /etc/apt/trusted.gpg.d/earth.gpg

As mentioned above, the old command also works:

apt-key list

How to remove a key added by a new method

If you are looking for an equivalent of a command like:

sudo apt-key del 7D8D08F6

Now, to remove the key, simply delete the file with commands like:

cd /etc/apt/trusted.gpg.d/
sudo rm NAME.gpg

But “apt-key del” also works.

How to remove a key added with apt-key add

If you want to delete individual keys, then use a command like this:

sudo apt-key del KEY_ID

To find out the KEY_ID, run the command

apt-key list

find the key you want, for example:

/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2016-04-12 [SC]
      EB4C 1BFD 4F04 2F6D DDCC  EC91 7721 F63B D38B 4796
uid           [ unknown ] Google Inc. (Linux Packages Signing Authority) <linux-packages-keymaster@google.com>
sub   rsa4096 2019-07-22 [S] [expires: 2022-07-21]

Look at the sequence of numbers and letters under the pub field - this is the key fingerprint. In this example, we are interested in the line

      EB4C 1BFD 4F04 2F6D DDCC  EC91 7721 F63B D38B 4796

To delete this key, run the following command (note that the spaces have been removed from the fingerprint):

sudo apt-key del EB4C1BFD4F042F6DDDCCEC917721F63BD38B4796

How to remove all keys added with apt-key add

Just delete the /etc/apt/trusted.gpg file:

sudo rm /etc/apt/trusted.gpg

Error “Cannot load modules/libphp7.so” (SOLVED)

Some Linux distributions have already started migrating to PHP 8. In some of them the new PHP version removes the old one, and as a result the web server may stop working because the files referenced in its configuration are missing or have been renamed.

Examples of errors you may encounter:

httpd: Syntax error on line 504 of /etc/httpd/conf/httpd.conf: Syntax error on line 1 of /etc/httpd/conf/mods-enabled/php.conf: Cannot load modules/libphp7.so into server: /etc/httpd/modules/libphp7.so: cannot open shared object file: No such file or directory

It says the file /etc/httpd/modules/libphp7.so was not found.

Another error that says the /etc/httpd/conf/extra/php7_module.conf file was not found:

httpd: Syntax error on line 504 of /etc/httpd/conf/httpd.conf: Syntax error on line 2 of /etc/httpd/conf/mods-enabled/php.conf: Could not open configuration file /etc/httpd/conf/extra/php7_module.conf: No such file or directory

On some distributions the Apache web server service is called apache2, on others httpd. This guide therefore covers both variants.

Fix “Cannot load modules/libphp7.so” when webserver service is named httpd (Arch Linux, CentOS and their derivatives)

To view the status of the service and the errors that led to its inoperability, run the command:

systemctl status httpd.service

Open the config file /etc/httpd/conf/mods-enabled/php.conf:

sudo vim /etc/httpd/conf/mods-enabled/php.conf

Find the line in it

LoadModule php7_module modules/libphp7.so

and replace it with:

LoadModule php_module modules/libphp.so

Then find the line

Include conf/extra/php7_module.conf

and replace with:

Include conf/extra/php_module.conf
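If you prefer, both replacements can be made with a single sed command. This is only a sketch: it assumes the file layout shown above and edits the file in place, so make a backup copy first:

sudo sed -i -e 's|libphp7\.so|libphp.so|' -e 's|php7_module|php_module|g' /etc/httpd/conf/mods-enabled/php.conf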

Restart the web server service:

sudo systemctl restart httpd.service

and check its status:

systemctl status httpd.service

Fix “Cannot load modules/libphp7.so” when webserver service is named apache2 (Debian, Ubuntu, Linux Mint, Kali Linux and their derivatives)

To view the status of the service and the errors that led to its inoperability, run the command:

systemctl status apache2.service

Disable PHP 7.* module:

sudo a2dismod php7.4

You may have a different PHP version installed; start typing “a2dismod php” and press the TAB key for autocompletion.
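If autocompletion is not available, you can also list the PHP modules Apache knows about (this assumes the standard Debian/Ubuntu layout):

ls /etc/apache2/mods-available/ | grep -i '^php'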

To enable PHP 8, use a command like the following (again, the TAB key helps with auto-completion):

sudo a2enmod php8

Restart the web server service:

sudo systemctl restart apache2.service

and check its status:

systemctl status apache2.service

VirtualBox shared folder is read-only (SOLVED)

VirtualBox Shared Folder allows you to easily exchange files between a virtual machine and a real computer.

By default, the contents of the shared folder are owned by the root user, so for regular users the files in it are effectively read-only. The following guide shows how to make the VirtualBox shared folder readable and writable for regular users.

1. Install VirtualBox Guest Additions.

Without Guest Additions, shared folders won't work properly.

2. Add a Shared Folder if you haven't already.

3. Make sure the “Read-only” checkbox is unchecked in the Shared Folder settings.

4. Add your user to the vboxsf group:

sudo usermod -a -G vboxsf $USER

Restart the computer (or at least log out and back in) so that the group change takes effect.

In theory, this should be enough to make the shared folder writable: the folder is mounted with vboxsf as its group, and users belonging to this group can edit its contents.

But on some distributions the folder is mounted with root as both the owner and the group. In this case, regular users can read the contents of the shared folder, but they cannot edit files in it, create new files, or delete existing ones.

The vboxsf filesystem supports the uid= and gid= mount options; you can try them with a command like the following (how to find the numeric values is shown a little further below):

sudo mount -t vboxsf -o 'uid=1000,gid=141' SHARE_NAME /PATH/TO/POINT/MOUNT

Or add a line like this to the /etc/fstab file:

SHARE_NAME	/PATH/TO/POINT/MOUNT	vboxsf	gid=141	0	0
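In both variants you need numeric values; the uid 1000 and gid 141 above are just examples. You can look up your own uid and the gid of the vboxsf group like this:

id -u
getent group vboxsf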

But neither of these methods worked in my case.

I tried to change the owner of the folder and its contents using chown:

echo $USER
mial

sudo chown -R mial /mnt/share

But this did not work either - the owner of this folder was still root.

The only way I found to make the folder writable was to change its access permissions with chmod.

This command allows all users to create and modify files and directories in the shared folder:

sudo chmod 777 /PATH/TO/POINT/MOUNT

Conclusion

Note that changing file permissions changes them not only for the virtual computer, but for the real one too! Therefore, the method described above cannot be considered ideal. If you have any thoughts on how to force the shared folder to be mounted with the vboxsf group, then write in the comments!

How to choose the default Java version in Arch Linux

Several versions of the JDK and OpenJDK are available in the standard Arch Linux repositories (and derivative distributions). You can install one or more of them. Even if you have the latest version installed, some programs may install a different version of the JDK as their dependency - multiple versions are allowed, they do not cause conflicts.

After that, you can see which of these versions is used by default, and also change it using the archlinux-java program.

Usage:

archlinux-java <COMMAND>

COMMAND can be one of the following:

	status		List installed Java environments and enabled one
	get		Return the short name of the Java environment set as default
	set <JAVA_ENV>	Force <JAVA_ENV> as default
	unset		Unset current default Java environment
	fix		Fix an invalid/broken default Java environment configuration

Start by viewing the status:

archlinux-java status

As you can see, I have two Java environments available:

  • java-11-openjdk
  • java-14-openjdk

And no Java environment is selected as the default.

I set java-14-openjdk as my default environment:

sudo archlinux-java set java-14-openjdk

I check again:

archlinux-java status

As you can see, java-14-openjdk is now used - the word (default) indicates this.

Errors: command java, javac or javap not found

When trying to start one of the following programs, you may encounter errors:

java
bash: java: command not found
# OR
bash: /usr/bin/java: No such file or directory

javac
bash: javac: command not found
# OR
bash: /usr/bin/javac: No such file or directory

javap
bash: javap: command not found
# OR
bash: /usr/bin/javap: No such file or directory

If you have already installed the JDK, then you need to select the version that will be used by default. This can be done using archlinux-java as shown above. After that, the error will disappear.

bash: finger: command not found in Arch Linux (RESOLVED)

The finger command displays extended information about the users logged in to the system: full name, login time, idle time and more.

By default, the finger command is not installed on many distributions, so when trying to run:

finger

You may receive an error:

bash: finger: command not found

In fact, it is not necessary to install finger; instead you can use the pinky program, which is available on almost all distributions:

pinky

However, you can install finger if you want.

How to install finger on Arch Linux, Manjaro

On Arch Linux, BlackArch and Manjaro, you need to install finger from the Arch User Repository (AUR). To do this, first install the pikaur program as described in the “Automatic installation and update of AUR packages” article.

Then run the following command to install finger:

pikaur -S netkit-bsd-finger

If you want to install finger with IPv6 and other Debian patches, run the following command:

pikaur -S netkit-bsd-finger-ipv6

How to downgrade to a previous kernel in Arch Linux

New Linux kernels bring support for new hardware and new features. But sometimes the kernel causes problems: it is completely or partially incompatible with existing software, especially video drivers often suffer from this, but this can also apply to any other software.

A very recent example: NVIDIA drivers are partially incompatible with linux >= 5.9 at the time of writing. Although the graphics card works, CUDA, OpenCL and probably other functions are broken. Of course, someday this will be fixed, but what about those who need CUDA and OpenCL or other programs incompatible with the latest version of the Linux kernel?

One option is to roll back to the previous version by installing it from the local package cache. This method is not the most pleasant, since you then have to prevent the downgraded package from being updated, or even stop updating the system altogether.

This method is especially annoying when it comes to the linux kernel, since you also have to deal with its dependent packages.

One of the easier options is to switch to the linux-lts kernel.

LTS stands for Long Term Support. Simply put, it is the Linux kernel and modules from one of the previous release series, whose version changes only rarely.

This kernel can be installed as a regular package, replacing the existing kernel. Depending on the configuration of your computer, you may need to install other *-lts packages, such as the nvidia-lts package, the NVIDIA video driver for the linux-lts kernel.

Also install linux-lts-headers.
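On Arch Linux the switch can look roughly like this (a sketch with packages from the official repositories; nvidia-lts is only needed if you use the proprietary NVIDIA driver, and grub-mkconfig only if GRUB is your bootloader):

sudo pacman -S linux-lts linux-lts-headers
sudo pacman -S nvidia-lts
sudo grub-mkconfig -o /boot/grub/grub.cfg

Then reboot and choose the LTS kernel in the boot menu.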

Once the problems that bother you have been fixed in the mainline kernel, you can switch back to the latest version of the Linux kernel.

Analogue of the --force option in pacman

When you install or update a package and some of the files it contains are already present in the file system, pacman aborts the operation and lists the conflicting files. In my experience, the usual cause is a package installed with pip that you then try to install again with pacman: the package files already exist, so the installation is not possible.

By the way, in this case, you definitely don't need to use the --force option, but you need to remove the interfering package with a command like:

pip uninstall PACKAGE

Previously, pacman had the --force option: if you specified it along with the install/update command, pacman would overwrite files that already existed on the system.

There are situations when this is really necessary. But the distribution maintainers decided that the --force option did more harm than good and removed it. It was replaced by the --overwrite <GLOB> option, which overwrites conflicting files matching the given glob pattern (the option can be used multiple times).

That is, the authors' idea is that it should be applied to specific files, so that we have a clear idea of what exactly we are overwriting.

I ran into a situation where my OS stopped booting. I tried to uninstall GNOME Display Manager and found out that pacman considered this package NOT INSTALLED. Consequently, it could not be updated, and some of its dependencies had been removed as orphans. So the system clearly failed to boot because of GDM, and the easiest fix was to remove it completely and reinstall it cleanly. But pacman refused to uninstall GDM, because it did not think the package was installed; and when I tried to install the gdm package, pacman refused as well, because the files of this package were already present on the system.

This is where the --overwrite option comes in, but GDM has a lot of files and it is impractical to list them manually. By experimenting, I found out that you can specify glob patterns covering whole directories, for example:

sudo pacman -S --overwrite '/usr/share/locale/*' gdm

And to get a complete analog of --force, you need to specify it as “--overwrite '/*'”.

I ran the following commands to force the installation, then completely remove GNOME Display Manager and install it cleanly:

sudo pacman -S --overwrite '/*' gdm
sudo pacman -Rn gdm
sudo pacman -S gdm

As a result, my problem was resolved. Do not overuse “--overwrite '/*'”: use this syntax only when you really understand what you are doing and when you are sure there is no other way to remove the interfering files. Usually they can be removed by the third-party package manager (Python's pip, Perl's and other languages' package managers) that installed them in the first place.

Error in LMDE “cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries nor crypto modules” (SOLVED)

When updating the system, if the initramfs has to be rebuilt (which is usually the case after every Linux kernel update), LMDE and some other Linux distributions print a warning. It is not a critical warning or an error; in fact, it is just information about a questionable aspect of the system configuration. An example of the notification:

cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries
    nor crypto modules. If that's on purpose, you may want to uninstall the
    'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs
    integration and avoid this warning.

In other words, the warning says that the initramfs image may not contain cryptsetup binaries or crypto modules, and that if this is intentional, you can remove the cryptsetup-initramfs package to disable the cryptsetup/initramfs integration and make the warning disappear.

Now cryptsetup and its dependencies are added to the initramfs image only when a device is found that needs to be unlocked at the initramfs stage.

Depending on your plans, there are two options:

  • if you are not sure whether you will ever create encrypted partitions, or partitions that must be unlocked at the initramfs stage, simply do nothing and ignore this informational message - it is harmless (a quick way to check for existing encrypted devices is shown right after this list)
  • if you are certain you will not create encrypted partitions, remove the cryptsetup-initramfs package
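A quick way to check whether any LUKS-encrypted devices are currently present (lsblk is available on practically every distribution; no output means no LUKS partitions were found):

lsblk -f | grep -i luks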

So, if you have no encrypted partitions and do not plan to create any, you can remove the cryptsetup-initramfs package:

sudo apt remove cryptsetup-initramfs
sudo apt autoremove

By the way, you can reinstall the cryptsetup-initramfs package later if you need it.
