
How to download a package without installation in Arch Linux and Manjaro. How to download the AUR package source code

How to download a package with pacman (from standard repositories)

To download a package without installing it, use the -w option:

sudo pacman -Sw PACKAGE

By default, the package will be downloaded to pacman's package cache directory; with the --cachedir option you can specify any other directory to save the package to:

sudo pacman -Sw --cachedir DIRECTORY PACKAGE

For example, the following command will download the iw package installation file to the current directory (--cachedir .):

sudo pacman -Sw --cachedir . iw
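
If you later want to install the downloaded package from the file, you can pass it to pacman -U; the glob below is just a sketch that assumes the downloaded file is in the current directory (the exact file name and version will differ):

sudo pacman -U ./iw-*.pkg.tar.zst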

How to download an installation package and source code from the AUR


Packages from the Arch User Repository (AUR) are not so simple, since there are no ready-made installation packages in the AUR. Instead of installation packages, the AUR necessarily contains PKGBUILD files, which contain the commands for building the package to be installed. In addition to the PKGBUILD file, other necessary files may also be present, such as patches for modifying the source code. Source code files and binaries are usually absent from the AUR repositories; instead, links and commands for downloading all the necessary files and source code are written in the PKGBUILD file.

Therefore, there may be several options for downloading AUR packages:

  • package repository (PKGBUILD file and other related files)
  • all files needed to build the package (source code files and other files downloaded in PKGBUILD)
  • a ready-to-install package that is not available elsewhere and is built directly on the user's computer

Let's consider all these situations.

How to download the AUR repository

To download (clone) a repository from the AUR, you need to know its URL. The repository address can be viewed with a command like:

pikaur -Si PACKAGE

For example:

pikaur -Si deadbeef-git

In the output of the previous command, notice the line “AUR Git URL”:

AUR Git URL     : https://aur.archlinux.org/deadbeef-git.git

To download (clone) the repository, use the following command:

git clone AUR_GIT_URL

For example:

git clone https://aur.archlinux.org/deadbeef-git.git

How to download AUR source code

Consider the following problem:

I need to change the source code of the program (meaning not the PKGBUILD file). How do I download the source files and unpack them?

To download the source code, you need to start by cloning the AUR repository; for this you need to know its URL. The repository address can be viewed with a command like:

pikaur -Si PACKAGE

For example:

pikaur -Si deadbeef-git

In the output of the previous command, notice the line “AUR Git URL”:

AUR Git URL     : https://aur.archlinux.org/deadbeef-git.git

To download (clone) the repository, use the following command:

git clone AUR_GIT_URL

For example:

git clone https://aur.archlinux.org/deadbeef-git.git

Go to the folder with the downloaded repository:

cd deadbeef-git/

To download and extract files, use the following command:

makepkg -o

If you want to skip dependency checking, then add the -d option:

makepkg -od

The result of the previous commands is that the source files needed to build the package are downloaded and extracted.
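
By default, makepkg extracts the downloaded sources into the src/ subdirectory of the cloned repository, so you can look there before editing, for example:

ls src/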

You can edit the files to suit your needs, and then build the installation file and install the package from it with the following command:

makepkg -si

If, as a result of your changes, the package cannot be built due to a checksum mismatch, then use one of the following makepkg options (an example follows the list):

  --nocheck        Do not run the check() function in the PKGBUILD
  --skipchecksums  Do not verify checksums of the source files
  --skipinteg      Do not perform any verification checks on source files
  --skippgpcheck   Do not verify source files with PGP signatures
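
For example, to build and install the package while skipping the source integrity checks (only reasonable when you deliberately modified the sources yourself):

makepkg -si --skipinteg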

How to download the installation file from the AUR

As mentioned above, this is a slightly incorrect formulation of the problem, since the AUR contains no ready-made installation files.

To get the installation file, use the following set of commands:

git clone AUR_GIT_URL
cd PACKAGE_DIRECTORY
makepkg -s

For example, downloading the source code, compiling the program, and building the installation package for deadbeef-git:

git clone https://aur.archlinux.org/deadbeef-git.git
cd deadbeef-git/
makepkg -s

If the command completes without errors, a file with the *.pkg.tar.zst extension is created in the same directory (in this case, it is deadbeef-git-r10944.4469d86c7-1-x86_64.pkg.tar.zst).
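
The resulting file can then be installed with pacman; the file name below is the one from this example and will differ for other revisions:

sudo pacman -U deadbeef-git-r10944.4469d86c7-1-x86_64.pkg.tar.zst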

How to set up Favorites and add folders in Wine File Manager? (SOLVED)

Wine File Manager is similar to Windows Explorer. It can be opened with the command:

winefile

There you can see several shortcuts and drives.

Among the shortcuts you will find:

  • My Computer
  • Documents
  • Trash
  • / (Linux filesystem root)

My Computer contains all disks connected to Linux. The “C:” drive corresponds to the ~/.wine/drive_c/ folder. The “Z:” drive is the root of the Linux file system. Other letters are flash drives and disks connected to Linux.

The root element of the shortcuts is Desktop. This refers to the Linux desktop, not the Windows (Wine) desktop.

That is, if you want a new folder to be visible in the Wine File Manager, then create it on your Linux desktop, for example:

mkdir ~/Desktop/Favorites/

You can copy any files to this folder for quick access.

You can also create shortcuts (symlinks) in this folder to files and programs both in the Wine file system and outside it.

Command to create a shortcut:

ln -s TARGET DIRECTORY

For example, the following command will create a link to ~/.wine/drive_c/windows/notepad.exe in the ~/Desktop/Favorites/ folder:

ln -s ~/.wine/drive_c/windows/notepad.exe ~/Desktop/Favorites/

To add the Downloads, Videos, Music folders next to the Documents folder in the Explorer folder tree, you can create the appropriate links:

ln -s ~/Downloads/ ~/Desktop/
ln -s ~/Music/ ~/Desktop/
ln -s ~/Videos/ ~/Desktop/

If you want to change drive letters, then run Wine configuration:

winecfg

And go to the Drives tab to customize the display of drives in the Wine File Manager.

How to make VirtualBox destroy virtual machines on computer restart

How to use VirtualBox on Linux so that virtual machines and their settings are not saved

The desire to completely destroy virtual machines is unusual and is most likely related to security and privacy. There are at least two ways to achieve the desired effect: the virtual machines are destroyed as soon as the computer is turned off.

1. Using VirtualBox on a Live System

If you need VirtualBox without saving settings, then you can work in a Live system.

Boot into Live mode and run the command to install VirtualBox (the example assumes a Debian-based Live system that uses apt, such as Kali Linux):

sudo apt install virtualbox virtualbox-ext-pack

After the command completes, you can start VirtualBox, create virtual machines in it and work in them.

On the next reboot, all changes made will be lost.

To get VirtualBox again, repeat the previous steps exactly.

2. Saving virtual machines in the /tmp directory

The second method involves using a regular Linux installation or Persistence.

If you are working with a Live system, select “Live USB Persistence” or “Live USB Encrypted Persistence” when booting.

Install VirtualBox:

sudo apt install virtualbox virtualbox-ext-pack

Then open VirtualBox and go to menu File → Preferences → General.

Set “Default Machine Folder” to /tmp.

As a result, all virtual machines will store their settings in the /tmp directory.
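
As an alternative sketch, the same default folder can also be set from the command line with VBoxManage:

VBoxManage setproperty machinefolder /tmp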

On each reboot, the /tmp directory is automatically cleared.

As a result, after the reboot, the VirtualBox executable files will remain in the system, but all virtual machines will be deleted.

If you are running a Live system, you will also need to select “Live USB Persistence” or “Live USB Encrypted Persistence” on subsequent reboots.

Why doesn’t Linux with Persistence keep settings after reboot? (SOLVED)

Persistence is a partition on a flash drive with a Live Linux system, thanks to which the programs and settings installed on the system are saved.

A live image with a Linux distribution can be written to a USB flash drive. The result is a bootable USB flash drive with Linux. That is, you can boot into Linux and use the programs and tools of this operating system. A feature of the Live image is that you can install programs, save files and make other changes and settings in the operating system. All settings are stored in the virtual file system. That is, within one session, work in the Live system does not differ from work in any other Linux. But immediately after reboot, all changes made will be lost and the system will return to its initial state.

In order for files and changes in the OS to be saved between reboots, you can create a Persistence partition, in which all changes will be stored. For an example of how to create a bootable USB with Persistence, see “How to make Kali Linux 2022 Live USB with Persistence and optional encryption (on Windows)”.

After creating Persistence, all changes should be “remembered” and be visible after a reboot. But some users find that even after creating persistent storage, files and installed programs still disappear after a reboot. What could be the problem?

The fact is that even after creating the Persistence volume, the user still has several boot options, for example:

  • Live system – OS boot without using persistent storage
  • Live USB Persistence – OS boot using persistent storage
  • Live USB Encrypted Persistence – OS boot using encrypted persistent storage

As you might guess, if you select “Live system” from the boot menu, then your system will be a regular Live system that does not use Persistence. In this case, the Live system option is the first item in the boot menu, that is, it will be used by default if no other option is selected.

Thus, to see the changes made before rebooting Linux, select one of the following options from the boot menu:

  • Live USB Persistence
  • Live USB Encrypted Persistence

“Failed - Network error” when exporting from phpMyAdmin (SOLVED)

phpMyAdmin allows you to export databases and individual tables (as well as individual rows) in the web interface.

In phpMyAdmin 5.1.3 (the latest stable version at the time of writing), users encountered a problem when exporting data from tables and databases to a file.

Regardless of the selected settings, instead of downloading the file, an error is shown:

Failed - Network error


This bug has been fixed in phpMyAdmin 5.3, which is available as a snapshot version at the time of writing.

You can download phpMyAdmin 5.3 from the direct link: https://files.phpmyadmin.net/snapshots/phpMyAdmin-5.3+snapshot-all-languages.zip

Or go to the download page of the phpMyAdmin site and select the latest version there: https://www.phpmyadmin.net/downloads/
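
For example, downloading and unpacking the snapshot from the command line might look like this (the web root /srv/http/phpmyadmin is only an assumption, adjust it to your web server layout; the name of the extracted directory may also differ slightly):

wget https://files.phpmyadmin.net/snapshots/phpMyAdmin-5.3+snapshot-all-languages.zip
unzip phpMyAdmin-5.3+snapshot-all-languages.zip
sudo mv phpMyAdmin-5.3+snapshot-all-languages /srv/http/phpmyadmin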

After unpacking phpMyAdmin in the web server folder, no additional configuration is required - the export of databases and tables works again.

“Error: failed to commit transaction (invalid or corrupted package)” (SOLVED)

When updating or installing packages in Arch Linux, Manjaro and their derivatives, you may encounter the “error: failed to commit transaction (invalid or corrupted package). Errors occurred, no packages were upgraded.” issue.

Full error log:

:: Retrieving packages...
 libinih-55-2-x86_64                                15.4 KiB   385 KiB/s 00:00 [############################################] 100%
(40/40) checking keys in keyring                                               [############################################] 100%
(40/40) checking package integrity                                             [############################################] 100%
error: libinih: signature from "Maxime Gauduin <alucryd@gmail.com>" is marginal trust
:: File /var/cache/pacman/pkg/libinih-55-2-x86_64.pkg.tar.zst is corrupted (invalid or corrupted package (PGP signature)).
Do you want to delete it? [Y/n] y
error: failed to commit transaction (invalid or corrupted package)
Errors occurred, no packages were upgraded.

In this case, the error occurs when trying to update the libinih package, but it can occur for other packages as well.

First, try deleting the package, as suggested in the prompt, and run the update again to re-download the package file. This will resolve the issue if the error is caused by a corrupted package, for example due to a network outage during the download.

If this does not help, then instead of a full system update, first update only the archlinux-keyring package:

sudo pacman -Sy archlinux-keyring

This should fix the PGP signature verification issue.
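
After the keyring has been updated, re-run the full system upgrade:

sudo pacman -Syu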

This error and the problem with incorrect PGP signatures can occur on systems that are rarely updated (updated with long breaks). The cause is that the packages with “invalid” PGP signatures are signed with keys that are only contained in the newer version of the archlinux-keyring package. Therefore, by starting with an update of archlinux-keyring, you get the new versions of the keys, which then successfully verify the PGP signatures of the package files.

What happens if an IPv4 client tries to access an IPv6-only server (SOLVED)

Question:

Hey! The article says that IPv6 is a completely different protocol, I had a question. If my recipient's email works only on IPv6 (that is, his mail server listens only through the IPv6 protocol), does this mean that when sending a letter from a mail server that is connected only to IPv4, the letter simply will not reach the recipient, that is, I will have to choose some kind of mail service whose mail server works with both IPv6 and IPv4 so that my friend can read my letter?

Answer:

The situation described, in which one server has only an IPv4 address and the other server has only an IPv6 address, is purely theoretical. ISPs that use IPv6 and provide IPv6 addresses to customers also provide IPv4 addresses at the same time.

For example, my router is connected to an ISP that supports IPv6, so the router has two types of IP addresses:

  • 10.241.24.29
  • 2001:fb1:fc0:135:20e8:31d0:4821:6624

My computer is connected to this router, so it also has two types of IP addresses:

  • 192.168.1.58
  • 2001:fb1:139:20d8:82c0:cb25:b750:24d4

Note that IPv4 and IPv6 are such separate networks that the router even has a dedicated DNS server address for IPv6 – 2001:fb0:100::207:49.

The same is true for hosting providers. For example, ISPs in my country do not support IPv6, but hosting providers here have supported IPv6 for a very long time (for example, I set up IPv6 for SuIP.biz back in 2016, and the rented VPS came with one free IPv4 address and three free IPv6 addresses).

You can search for websites with IPv6 support and look at their DNS records – you will see that in addition to the AAAA record (IPv6 address of the site), there is also an A record for the site (IPv4 address of the site).
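
For example, you can check both record types with dig (any IPv6-enabled site can be substituted for suip.biz):

dig suip.biz A +short
dig suip.biz AAAA +short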

That is, yes, if one of the computers (client or server1) is connected only to an IPv4 network, and the second computer (server or server2) is connected only to an IPv6 network, then theoretically it is simply impossible to build a network route between them from the first to the second. But in practice, this problem does not arise simply for the reason that absolutely all clients and servers support IPv4, and some also support IPv6. That is, all possible combinations work according to one of the following options:

  • client and server support IPv6 – IPv6 is used
  • client supports IPv6 and server does not support IPv6 – IPv4 is used
  • client does not support IPv6 and server supports IPv6 – IPv4 is used
  • client does not support IPv6 and server does not support IPv6 – IPv4 is used

However, it is possible to isolate an IPv6-enabled server from an IPv4 network, which is what I talk about in the section “How to configure SSH to work with IPv6 only”.

In short: IPv4 and IPv6 are two different networks, even though they run on the same wires and on the same hardware.

If you are interested in the specific error: when you try to open an IPv6-only site from an IPv4-only client, you get the “Network is unreachable” error.

Another example of an error: if you try to run the following command from an IPv6-enabled network:

sudo nmap -6 suip.biz

then the host suip.biz will be scanned.

If you run the same command from a network without IPv6 support, an error will be displayed: “setup_target: failed to determine route to suip.biz (2a02:f680:1:1100::3d60)”.

How to edit the Access denied page in Squid? How to insert custom pictures and mail?

The custom Access denied page can only be shown if the user connects via HTTP. For HTTPS connections (which are currently the vast majority), it is impossible to change the displayed page (that is, display the configured Access denied page) due to the very nature of HTTPS, which is precisely designed to ensure that the transmitted data cannot be modified.

That is, you can edit the Access denied page in Squid, but it will only show up on the few occasions when an HTTP connection is made.

For HTTPS connections, a standard web browser page will be displayed with a message like “The proxy server is refusing connections”.

That is, it can be said that the custom Access denied page in Squid will be seen quite rarely, and configuring it can be considered rather outdated functionality.

Squid has page templates with various messages, including denied access, in various languages. For example: /usr/share/squid/errors/en/ERR_ACCESS_DENIED (“ERROR: The requested URL could not be retrieved”).

You can edit this page like a regular HTML file.

This page uses substitution codes that Squid expands when rendering the template, for example:

  • %U
  • %c
  • %w
  • %W

The meaning of these codes, as well as many other codes, can be found on the following page: https://wiki.squid-cache.org/Features/CustomErrors
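
As a rough sketch, a customized fragment of ERR_ACCESS_DENIED using the %U code and the mailto syntax described in the next section might look like this (the wording is just an example):

<p>Access to %U has been denied by the proxy administrator.</p>
<p>Contact: <A HREF="mailto:%w%W">%w</A></p>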

How to set Squid cache manager e-mail?

If you only want to specify the e-mail address of the Squid cache manager, then you do not need to edit the template files. You can use the following directives:

  • cache_mgr – the e-mail address of the local cache manager who will receive mail if the cache dies. The default is “webmaster”.
  • email_err_data – if enabled, information about the error that occurred will be included in the mailto links of the ERR pages (if %W is set), so that the e-mail body contains the data. The syntax is <A HREF="mailto:%w%W">%w</A>. It is enabled by default, so no further configuration is required (a configuration example follows this list).
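
A minimal squid.conf fragment (the e-mail address is just a placeholder, replace it with your own):

cache_mgr admin@example.com
email_err_data on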

See also the complete guide: How to create and configure a Squid proxy server

What is the difference between “systemctl reboot” and “reboot” and “systemctl poweroff” and “poweroff”

What's the difference between

sudo systemctl reboot

And

sudo reboot

Is it true that which command you should use depends on the operating system, and that one of them executes a shortened version while the other uses systemctl?

Answer:

The halt, poweroff and reboot commands are implemented to maintain basic compatibility with the original SysV commands. The verbs

  • systemctl halt
  • systemctl poweroff
  • systemctl reboot

provide the same functionality with some additional features.

That is, reboot is now also systemctl. You can verify this:

which reboot
/usr/sbin/reboot

file /usr/sbin/reboot
/usr/sbin/reboot: symbolic link to /bin/systemctl

That is, the reboot command is actually a symbolic link to systemctl.

In turn, the command

systemctl reboot

is an abbreviation for

systemctl start reboot.target --job-mode=replace-irreversibly --no-block

That is

reboot

is exactly the same as

systemctl reboot

as well as

systemctl start reboot.target --job-mode=replace-irreversibly --no-block

This is true for distributions that have switched to systemd (for example, Arch Linux, the entire Debian family, including Ubuntu). That is, for most modern distributions, except for those on which SysV remained.

In some cases, the reboot command does not work – see Error “Failed to talk to init daemon” for details. In this case, to restart the computer, you must add the -f option:

reboot -f

The analogous shutdown command is:

poweroff -f

If even these commands do not help, then specify the -f option twice.

To turn off your computer, run:

poweroff -f -f

Or restart your computer with the command:

reboot -f -f

The -f option means forced immediate stop, shutdown, or reboot. When specified once, this results in an immediate but clean shutdown by the system manager. If specified twice, it results in an immediate shutdown without contacting the system manager.

When using the -f option with systemctl halt, systemctl poweroff, systemctl reboot, or systemctl kexec, the selected operation is performed without shutting down all units. However, all processes will be forcibly terminated, and all file systems will be unmounted or remounted read-only. Therefore, it is a radical, but relatively safe option to request an immediate restart. If you specify --force twice for these operations (except for kexec), they will be executed immediately, without killing any processes or unmounting any filesystems. Warning: specifying --force twice for any of these operations can result in data loss. Note that if you specify --force twice, the selected operation is performed by systemctl itself and is not associated with the system manager. This means that the command must be executed even if the system manager fails.
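
For example, the most drastic variant, equivalent to reboot -f -f above (use with care, it can result in data loss):

sudo systemctl reboot --force --force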

How to convert JPG to PDF

The article “How to convert PDF to JPG using command line in Linux” shows how to split a PDF file into separate pages while converting them to images.

But what if you want to do the opposite? How do you assemble JPG images into a PDF file? This article is devoted to exactly that: creating a single PDF document from JPG files.

The convert utility from the ImageMagick package does a great job of combining images (JPG and other formats) into PDF.

On Debian, Linux Mint, Ubuntu, Kali Linux and their derivatives, you can install this package with this command:

sudo apt install imagemagick

On Arch Linux, Manjaro and their derivatives, run the following commands to install:

sudo pacman -S imagemagick
# To support other formats, such as JPEG XL, JPEG 2000, HEIF, DNG, SVG, WEBP, WMF, OpenRaster, OpenEXR and DjVu, install the following dependencies:
sudo pacman -S ghostscript libheif libjxl libraw librsvg libwebp libwmf libxml2 libzip ocl-icd openexr openjpeg2 djvulibre pango

To convert a single image to PDF, run the following command:

convert PICTURE.jpg RESULT.pdf

Example:

convert PL48536179-5.jpg out.pdf

You can specify multiple input .jpg files at once, for example:

convert PL48536179-5.jpg PL48536179-6.jpg PL48536179-7.jpg out.pdf

They will be added one by one to the generated PDF file.

If there are many files and they have a common prefix, then you can use the wildcard character * to add several files at once:

convert PL48536179* out.pdf

Or like this:

convert PL*.jpg out.pdf

The following command will create a PDF file from all JPG files in the current directory:

convert *.jpg out.pdf

By default, a PDF file is created in the highest quality. If you want to reduce the size of the output file, then specify the -quality option with a value less than 100, for example:

convert -quality 70 PL*.jpg out2.pdf
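
You can compare the sizes of the two resulting files, for example:

ls -lh out.pdf out2.pdf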

As you can see, the size of the PDF file has indeed decreased.

Online JPG to PDF conversion service

If you are a Windows user, or you do not want to install new utilities and deal with the command line to convert JPG to PDF, then you can collect JPG files into one PDF document on the page of the Online service for converting JPG to PDF: https://suip.biz/?act=convert-jpg-to-pdf

Brief instructions for use are given there; the main point is that if there are several files, they must be placed in a ZIP archive before uploading.
