1<?xml version="1.0" encoding="UTF-8"?>
2<!DOCTYPE sect1 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
3 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
4 <!ENTITY % general-entities SYSTEM "../../general.ent">
5 %general-entities;
6]>
7
8<sect1 id="unpacking">
9 <?dbhtml filename="notes-on-building.html"?>
10
11
12 <title>Notes on Building Software</title>
13
14 <para>Those people who have built an LFS system may be aware
15 of the general principles of downloading and unpacking software. Some
16 of that information is repeated here for those new to building
17 their own software.</para>
18
19 <para>Each set of installation instructions contains a URL from which you
  can download the package. The patches, however, are stored on the LFS
21 servers and are available via HTTP. These are referenced as needed in the
22 installation instructions.</para>
23
24 <para>While you can keep the source files anywhere you like, we assume that
25 you have unpacked the package and changed into the directory created by the
26 unpacking process (the source directory). We also assume you have
27 uncompressed any required patches and they are in the directory
28 immediately above the source directory.</para>
29
  <para>We cannot emphasize strongly enough that you should start from a
31 <emphasis>clean source tree</emphasis> each time. This means that if
32 you have had an error during configuration or compilation, it's usually
33 best to delete the source tree and
34 re-unpack it <emphasis>before</emphasis> trying again. This obviously
35 doesn't apply if you're an advanced user used to hacking
36 <filename>Makefile</filename>s and C code, but if in doubt, start from a
37 clean tree.</para>
38
39 <sect2>
40 <title>Building Software as an Unprivileged (non-root) User</title>
41
42 <para>The golden rule of Unix System Administration is to use your
43 superpowers only when necessary. Hence, BLFS recommends that you
44 build software as an unprivileged user and only become the
45 <systemitem class='username'>root</systemitem> user when installing the
46 software. This philosophy is followed in all the packages in this book.
47 Unless otherwise specified, all instructions should be executed as an
48 unprivileged user. The book will advise you on instructions that need
49 <systemitem class='username'>root</systemitem> privileges.</para>
50
51 </sect2>
52
53 <sect2>
54 <title>Unpacking the Software</title>
55
56 <para>If a file is in <filename class='extension'>.tar</filename> format
57 and compressed, it is unpacked by running one of the following
58 commands:</para>
59
60<screen><userinput>tar -xvf filename.tar.gz
61tar -xvf filename.tgz
62tar -xvf filename.tar.Z
63tar -xvf filename.tar.bz2</userinput></screen>
64
65 <note>
    <para>You may omit the <option>v</option> parameter in the commands
    shown above and below if you wish to suppress the verbose listing of all
68 the files in the archive as they are extracted. This can help speed up the
69 extraction as well as make any errors produced during the extraction
70 more obvious to you.</para>
71 </note>
72
73 <para>You can also use a slightly different method:</para>
74
75<screen><userinput>bzcat filename.tar.bz2 | tar -xv</userinput></screen>
76
77 <para>
78 Finally, sometimes we have a compressed patch file in
79 <filename class='extension'>.patch.gz</filename> or
80 <filename class='extension'>.patch.bz2</filename> format.
      The best way to apply the patch is to pipe the output of the
82 decompressor to the <command>patch</command> utility. For example:
83 </para>
84
85 <screen><userinput>gzip -cd ../patchname.patch.gz | patch -p1</userinput></screen>
86
87 <para>
88 Or for a patch compressed with <command>bzip2</command>:
89 </para>
90
91 <screen><userinput>bzcat ../patchname.patch.bz2 | patch -p1</userinput></screen>
92
93 </sect2>
94
95 <sect2>
96 <title>Verifying File Integrity</title>
97
    <para>To verify that a downloaded file is complete, many package
    maintainers distribute md5sums of the files. To verify the
100 md5sum of the downloaded files, download both the file and the
101 corresponding md5sum file to the same directory (preferably from different
102 on-line locations), and (assuming <filename>file.md5sum</filename> is the
103 md5sum file downloaded) run the following command:</para>
104
105<screen><userinput>md5sum -c file.md5sum</userinput></screen>
106
    <para>If there are any errors, they will be reported. Note that the BLFS
    book also includes md5sums for all the source files. To use the BLFS
109 supplied md5sums, you can create a <filename>file.md5sum</filename> (place
110 the md5sum data and the exact name of the downloaded file on the same
111 line of a file, separated by white space) and run the command shown above.
    Alternatively, simply run the command shown below and compare the output
113 to the md5sum data shown in the BLFS book.</para>
114
115<screen><userinput>md5sum <replaceable>&lt;name_of_downloaded_file&gt;</replaceable></userinput></screen>
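
    <para>For instance, a minimal sketch of creating such a file by hand (the
    checksum value and the tarball name here are hypothetical; substitute the
    md5sum shown in the book and the exact name of the file you
    downloaded):</para>

<screen><userinput>echo "0123456789abcdef0123456789abcdef  <replaceable>package</replaceable>-1.0.tar.gz" &gt; file.md5sum &amp;&amp;
md5sum -c file.md5sum</userinput></screen>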
116
    <para>MD5 is not cryptographically secure, so the md5sums are only
    provided for detecting unmalicious changes to the file content, for
    example an error or truncation introduced during network transfer, or
    a <quote>stealth</quote> update to the package from the upstream
    (updating the content of a released tarball instead of properly making
    a new release).</para>
123
    <para>There is no <quote>100%</quote> secure way to verify the
    authenticity of the source files. Assuming the upstream is managing
    their website correctly (the private key is not leaked and the domain is
    not hijacked), and the trust anchors have been set up correctly using
    <xref linkend="make-ca"/> on the BLFS system, we can reasonably trust
    download URLs to the upstream official website
    <emphasis role="bold">with the https protocol</emphasis>. Note that the
    BLFS book itself is published on a website with https, so you should
    already have some confidence in the https protocol, or you wouldn't
    trust the content of this book.</para>
134
    <para>If the package is downloaded from an unofficial location (for
    example a local mirror), checksums generated by cryptographically secure
    digest algorithms (for example SHA256) can be used to verify the
    authenticity of the package. Download the checksum file from the upstream
    <emphasis role="bold">official</emphasis> website (or somewhere
    <emphasis role="bold">you can trust</emphasis>) and compare the
    checksum of the package from the unofficial location with it. For
    example, a SHA256 checksum can be checked with the command:</para>
143
144 <note>
      <para>If the checksum and the package are downloaded from the same
      untrusted location, verifying the package against the checksum gains
      you no additional security. The attacker can fake the checksum as
      well as compromise the package itself.</para>
149 </note>
150
151<screen><userinput>sha256sum -c <replaceable>file</replaceable>.sha256sum</userinput></screen>
152
153 <para>If <xref linkend="gnupg2"/> is installed, you can also verify the
    authenticity of the package with a GPG signature. Import the upstream GPG
155 public key with:</para>
156
157<screen><userinput>gpg --recv-key <replaceable>keyID</replaceable></userinput></screen>
158
159 <para><replaceable>keyID</replaceable> should be replaced with the key ID
160 from somewhere <emphasis role="bold">you can trust</emphasis> (for
161 example, copy it from the upstream official website using https). Now
162 you can verify the signature with:</para>
163
<screen><userinput>gpg --verify <replaceable>file</replaceable>.sig <replaceable>file</replaceable></userinput></screen>
165
    <para>The advantage of a <application>GnuPG</application> signature is
    that, once you have imported a public key which can be trusted, you can
    download both the package and its signature from the same unofficial
    location and verify them with the public key. So you won't need to
    connect to the official upstream website to retrieve a checksum for
    each new release. You only need to update the public key if it has
    expired or been revoked.
    </para>
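
    <para>As an additional check, you can display the fingerprint of the
    imported key and compare it with the fingerprint published by the
    upstream (if one is published):</para>

<screen><userinput>gpg --fingerprint <replaceable>keyID</replaceable></userinput></screen>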
173
174 </sect2>
175
176 <sect2>
177 <title>Creating Log Files During Installation</title>
178
179 <para>For larger packages, it is convenient to create log files instead of
180 staring at the screen hoping to catch a particular error or warning. Log
181 files are also useful for debugging and keeping records. The following
182 command allows you to create an installation log. Replace
183 <replaceable>&lt;command&gt;</replaceable> with the command you intend to execute.</para>
184
185<screen><userinput>( <replaceable>&lt;command&gt;</replaceable> 2&gt;&amp;1 | tee compile.log &amp;&amp; exit $PIPESTATUS )</userinput></screen>
186
187 <para><option>2&gt;&amp;1</option> redirects error messages to the same
188 location as standard output. The <command>tee</command> command allows
189 viewing of the output while logging the results to a file. The parentheses
    around the command run the entire command in a subshell, and finally the
    <command>exit $PIPESTATUS</command> command ensures that the result of the
    <replaceable>&lt;command&gt;</replaceable> is returned as the result, and
    not the result of the <command>tee</command> command.</para>
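
    <para>For example, to log a <command>make</command> run:</para>

<screen><userinput>( make 2&gt;&amp;1 | tee compile.log &amp;&amp; exit $PIPESTATUS )</userinput></screen>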
194
195 </sect2>
196
197 <sect2 id="parallel-builds" xreflabel="Using Multiple Processors">
198 <title>Using Multiple Processors</title>
199
    <para>For many modern systems with multiple processors (or cores), the
    compilation time for a package can be reduced by performing a
    <quote>parallel make</quote>, either by setting an environment variable
    or by telling the <command>make</command> program to execute multiple
    jobs simultaneously.</para>
204
205 <para>For instance, an Intel Core i9-13900K CPU contains 8 performance
206 (P) cores and 16 efficiency (E) cores, and the P cores support SMT
207 (Simultaneous MultiThreading, also known as
208 <quote>Hyper-Threading</quote>) so each P core can run two threads
209 simultaneously and the Linux kernel will treat each P core as two
210 logical cores. As a result, there are 32 logical cores in total.
    To utilize all these logical cores when running <command>make</command>, we
212 can set an environment variable to tell <command>make</command> to
213 run 32 jobs simultaneously:</para>
214
215 <screen><userinput>export MAKEFLAGS='-j32'</userinput></screen>
216
    <para>or just build with:</para>
218
219 <screen><userinput>make -j32</userinput></screen>
220
221 <para>
222 If you have applied the optional <command>sed</command> when building
223 <application>ninja</application> in LFS, you can use:
224 </para>
225
226 <screen><userinput>export NINJAJOBS=32</userinput></screen>
227
228 <para>
229 when a package uses <command>ninja</command>, or just:
230 </para>
231
232 <screen><userinput>ninja -j32</userinput></screen>
233
234 <para>
235 If you are not sure about the number of logical cores, run the
236 <command>nproc</command> command.
237 </para>
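
    <para>For example, to let <command>make</command> use all available
    logical cores without hard-coding the number:</para>

    <screen><userinput>make -j$(nproc)</userinput></screen>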
238
239 <para>
240 For <command>make</command>, the default number of jobs is 1. But
241 for <command>ninja</command>, the default number of jobs is N + 2 if
242 the number of logical cores N is greater than 2; or N + 1 if
      N is 1 or 2. The reason for using a number of jobs slightly greater
      than the number of logical cores is to keep all logical
      processors busy even if some jobs are performing I/O operations.
246 </para>
247
248 <para>
      Note that the <option>-j</option> switch only limits the parallel
      jobs started by <command>make</command> or <command>ninja</command>,
      but each job may still spawn its own processes or threads. For
      example, <command>ld.gold</command> will use multiple threads for
      linking, and some tests of packages can spawn multiple threads for
      testing thread safety properties. There is no generic way for the
      build system to know the number of processes or threads spawned by
      a job. So generally we should not consider the value passed with
      <option>-j</option> a hard limit on the number of logical cores to
      use. Read <xref linkend='build-in-cgroup'/> if you want to set such
      a hard limit.
260 </para>
261
    <para>Generally, the number of processes should not greatly exceed the
    number of cores supported by the CPU. To list the processors on your
    system, issue: <userinput>grep processor /proc/cpuinfo</userinput>.
265 </para>
266
267 <para>In some cases, using multiple processes may result in a race
268 condition where the success of the build depends on the order of the
269 commands run by the <command>make</command> program. For instance, if an
270 executable needs File A and File B, attempting to link the program before
271 one of the dependent components is available will result in a failure.
272 This condition usually arises because the upstream developer has not
273 properly designated all the prerequisites needed to accomplish a step in the
274 Makefile.</para>
275
276 <para>If this occurs, the best way to proceed is to drop back to a
277 single processor build. Adding <option>-j1</option> to a make command
278 will override the similar setting in the <envar>MAKEFLAGS</envar>
279 environment variable.</para>
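
    <para>For example, even if <envar>MAKEFLAGS</envar> is set to
    <literal>'-j32'</literal> as shown above, the following command still
    runs a serial build:</para>

    <screen><userinput>make -j1</userinput></screen>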
280
281 <important>
282 <para>
        Another problem may occur with modern CPUs, which have a lot of cores.
284 Each job started consumes memory, and if the sum of the needed
285 memory for each job exceeds the available memory, you may encounter
286 either an OOM (Out of Memory) kernel interrupt or intense swapping
287 that will slow the build beyond reasonable limits.
288 </para>
289
290 <para>
291 Some compilations with <command>g++</command> may consume up to 2.5 GB
292 of memory, so to be safe, you should restrict the number of jobs
293 to (Total Memory in GB)/2.5, at least for big packages such as LLVM,
294 WebKitGtk, QtWebEngine, or libreoffice.
295 </para>
296 </important>
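
    <para>As an illustration (this is not part of the book's instructions),
    the job count suggested by that rule can be computed from
    <filename>/proc/meminfo</filename>, assuming 2.5 GB per job:</para>

<screen role="nodump"><userinput># MemTotal is reported in kB; 2.5 GB is 2621440 kB
JOBS=$(awk '/MemTotal/ { printf "%d", $2 / 2621440 }' /proc/meminfo)
make -j$(( JOBS &gt; 0 ? JOBS : 1 ))</userinput></screen>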
297 </sect2>
298
299 <sect2 id="build-in-cgroup">
300 <title>Use Linux Control Group to Limit the Resource Usage</title>
301
302 <para>
303 Sometimes we want to limit the resource usage when we build a
304 package. For example, when we have 8 logical cores, we may want
305 to use only 6 cores for building the package and reserve another
306 2 cores for playing a movie. The Linux kernel provides a feature
307 called control groups (cgroup) for such a need.
308 </para>
309
310 <para>
      Enable the control group feature in the kernel configuration, then rebuild the
312 kernel and reboot if necessary:
313 </para>
314
315 <xi:include xmlns:xi="http://www.w3.org/2001/XInclude"
316 href="cgroup-kernel.xml"/>
317
318 <!-- We need cgroup2 mounted at /sys/fs/cgroup. It's done by
319 systemd itself in LFS systemd, mountvirtfs script in LFS sysv. -->
320
321 <para revision='systemd'>
322 Ensure <xref linkend='systemd'/> and <xref linkend='shadow'/> have
323 been rebuilt with <xref linkend='linux-pam'/> support (if you are
      interacting via an SSH or graphical session, also ensure the
325 <xref linkend='openssh'/> server or the desktop manager has been
326 built with <xref linkend='linux-pam'/>). As the &root; user, create
327 a configuration file to allow resource control without &root;
328 privilege, and instruct <command>systemd</command> to reload the
329 configuration:
330 </para>
331
332 <screen revision="systemd" role="nodump"><userinput>mkdir -pv /etc/systemd/system/user@.service.d &amp;&amp;
333cat &gt; /etc/systemd/system/user@.service.d/delegate.conf &lt;&lt; EOF &amp;&amp;
334<literal>[Service]
335Delegate=memory cpuset</literal>
336EOF
337systemctl daemon-reload</userinput></screen>
338
339 <para revision='systemd'>
      Then log out and log in again. Now, to run <command>make -j5</command>
341 with the first 4 logical cores and 8 GB of system memory, issue:
342 </para>
343
344 <screen revision="systemd" role="nodump"><userinput>systemctl --user start dbus &amp;&amp;
345systemd-run --user --pty --pipe --wait -G -d \
346 -p MemoryHigh=8G \
347 -p AllowedCPUs=0-3 \
348 make -j5</userinput></screen>
349
350 <para revision='sysv'>
351 Ensure <xref linkend='sudo'/> is installed. To run
352 <command>make -j5</command> with the first 4 logical cores and 8 GB
353 of system memory, issue:
354 </para>
355
356 <!-- "\EOF" because we expect $$ to be expanded by the "bash -e"
357 shell, not the current shell.
358
359 TODO: can we use elogind to delegate the controllers (like
360 systemd) to avoid relying on sudo? -->
361 <screen revision="sysv" role="nodump"><userinput>bash -e &lt;&lt; \EOF
362 sudo mkdir /sys/fs/cgroup/$$
363 sudo sh -c \
364 "echo +memory +cpuset > /sys/fs/cgroup/cgroup.subtree_control"
365 sudo sh -c \
366 "echo 0-3 > /sys/fs/cgroup/$$/cpuset.cpus"
367 sudo sh -c \
368 "echo $(bc -e '8*2^30') > /sys/fs/cgroup/$$/memory.high"
369 (
370 sudo sh -c "echo $BASHPID > /sys/fs/cgroup/$$/cgroup.procs"
371 exec make -j5
372 )
373 sudo rmdir /sys/fs/cgroup/$$
374EOF</userinput></screen>
375
376 <para>
377 With
378 <phrase revision='systemd'>
379 <parameter>MemoryHigh=8G</parameter>
380 </phrase>
381 <phrase revision='sysv'>
382 <literal>8589934592</literal> (the output of
383 <userinput>bc -e '8*2^30'</userinput>, 2^30 represents
384 2<superscript>30</superscript>, i.e. a Gigabyte) in the
385 <filename>memory.high</filename> entry
386 </phrase>, a soft limit of memory usage is set.
      If the processes in the cgroup (<command>make</command> and all of its
      descendants) use more than 8 GB of system memory in total,
389 the kernel will throttle down the processes and try to reclaim the
390 system memory from them. But they can still use more than 8 GB of
      system memory. If you want to set a hard limit instead, replace
392 <phrase revision='systemd'>
393 <parameter>MemoryHigh</parameter> with
394 <parameter>MemoryMax</parameter>.
395 </phrase>
396 <phrase revision='sysv'>
397 <filename>memory.high</filename> with
398 <filename>memory.max</filename>.
399 </phrase>
      But doing so will cause the processes to be killed if 8 GB is not
      enough for them.
402 </para>
403
404 <para>
405 <phrase revision='systemd'>
406 <parameter>AllowedCPUs=0-3</parameter>
407 </phrase>
408 <phrase revision='sysv'>
409 <literal>0-3</literal> in the <filename>cpuset.cpus</filename>
410 entry
411 </phrase> makes the kernel only run the processes in the cgroup on
412 the logical cores with numbers 0, 1, 2, or 3. You may need to
      adjust this setting based on the mapping between the logical cores and the
414 physical cores. For example, with an Intel Core i9-13900K CPU,
415 the logical cores 0, 2, 4, ..., 14 are mapped to the first threads of
416 the eight physical P cores, the logical cores 1, 3, 5, ..., 15 are
417 mapped to the second threads of the physical P cores, and the logical
418 cores 16, 17, ..., 31 are mapped to the 16 physical E cores. So if
419 we want to use four threads from four different P cores, we need to
420 specify <literal>0,2,4,6</literal> instead of <literal>0-3</literal>.
      Note that other CPU models may use a different mapping scheme.
422 If you are not sure about the mapping between the logical cores
423 and the physical cores, run the <command>lscpu --extended</command>
424 command which will output logical core IDs in the
425 <computeroutput>CPU</computeroutput> column, and physical core
426 IDs in the <computeroutput>CORE</computeroutput> column.
427 </para>
428
429 <para>
430 When the <command>nproc</command> or <command>ninja</command> command
431 runs in a cgroup, it will use the number of logical cores assigned to
432 the cgroup as the <quote>system logical core count.</quote> For
433 example, in a cgroup with logical cores 0-3 assigned,
434 <command>nproc</command> will print
435 <computeroutput>4</computeroutput>, and <command>ninja</command>
436 will run 6 (4 + 2) jobs simultaneously if no <option>-j</option>
437 setting is explicitly given.
438 </para>
439
440 <para revision="systemd">
441 Read the man pages <ulink role='man'
442 url='&man;systemd-run.1'>systemd-run(1)</ulink> and
443 <ulink role='man'
444 url='&man;systemd.resource-control.5'>systemd.resource-control(5)</ulink>
445 for the detailed explanation of parameters in the command.
446 </para>
447
448 <para revision="sysv">
449 Read the <filename>Documentation/admin-guide/cgroup-v2.rst</filename>
450 file in the Linux kernel source tree for the detailed explanation of
451 <systemitem class="filesystem">cgroup2</systemitem> pseudo file
      system entries referred to in the command.
453 </para>
454
455 </sect2>
456
457 <sect2 id="automating-builds" xreflabel="Automated Building Procedures">
458 <title>Automated Building Procedures</title>
459
460 <para>There are times when automating the building of a package can come in
461 handy. Everyone has their own reasons for wanting to automate building,
462 and everyone goes about it in their own way. Creating
463 <filename>Makefile</filename>s, <application>Bash</application> scripts,
464 <application>Perl</application> scripts or simply a list of commands used
465 to cut and paste are just some of the methods you can use to automate
466 building BLFS packages. Detailing how and providing examples of the many
467 ways you can automate the building of packages is beyond the scope of this
468 section. This section will expose you to using file redirection and the
469 <command>yes</command> command to help provide ideas on how to automate
470 your builds.</para>
471
472 <bridgehead renderas="sect3">File Redirection to Automate Input</bridgehead>
473
474 <para>You will find times throughout your BLFS journey when you will come
475 across a package that has a command prompting you for information. This
476 information might be configuration details, a directory path, or a response
477 to a license agreement. This can present a challenge to automate the
478 building of that package. Occasionally, you will be prompted for different
479 information in a series of questions. One method to automate this type of
480 scenario requires putting the desired responses in a file and using
481 redirection so that the program uses the data in the file as the answers to
482 the questions.</para>
483<!-- outdated
484 <para>Building the <application>CUPS</application> package is a good
485 example of how redirecting a file as input to prompts can help you automate
486 the build. If you run the test suite, you are asked to respond to a series
487 of questions regarding the type of test to run and if you have any
488 auxiliary programs the test can use. You can create a file with your
489 responses, one response per line, and use a command similar to the
490 one shown below to automate running the test suite:</para>
491
492<screen><userinput>make check &lt; ../cups-1.1.23-testsuite_parms</userinput></screen>
493-->
    <para>Redirecting a file in this way effectively makes the command use
    the responses in the file as the input to the questions. Occasionally
    you may end up doing a bit of
496 trial and error determining the exact format of your input file for some
497 things, but once figured out and documented you can use this to automate
498 building the package.</para>
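
    <para>A minimal sketch of the technique (the script name, the prompts,
    and the responses here are hypothetical; adapt them to the package you
    are building):</para>

<screen role="nodump"><userinput>cat &gt; ../responses &lt;&lt; "EOF"
<literal>y
/usr/share/<replaceable>somedir</replaceable></literal>
EOF
./<replaceable>some-interactive-script</replaceable> &lt; ../responses</userinput></screen>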
499
500 <bridgehead renderas="sect3">Using <command>yes</command> to Automate
501 Input</bridgehead>
502
503 <para>Sometimes you will only need to provide one response, or provide the
504 same response to many prompts. For these instances, the
505 <command>yes</command> command works really well. The
506 <command>yes</command> command can be used to provide a response (the same
507 one) to one or more instances of questions. It can be used to simulate
508 pressing just the <keycap>Enter</keycap> key, entering the
509 <keycap>Y</keycap> key or entering a string of text. Perhaps the easiest
510 way to show its use is in an example.</para>
511
512 <para>First, create a short <application>Bash</application> script by
513 entering the following commands:</para>
514
515<screen><userinput>cat &gt; blfs-yes-test1 &lt;&lt; "EOF"
516<literal>#!/bin/bash
517
518echo -n -e "\n\nPlease type something (or nothing) and press Enter ---> "
519
520read A_STRING
521
522if test "$A_STRING" = ""; then A_STRING="Just the Enter key was pressed"
523else A_STRING="You entered '$A_STRING'"
524fi
525
526echo -e "\n\n$A_STRING\n\n"</literal>
527EOF
528chmod 755 blfs-yes-test1</userinput></screen>
529
530 <para>Now run the script by issuing <command>./blfs-yes-test1</command> from
531 the command line. It will wait for a response, which can be anything (or
532 nothing) followed by the <keycap>Enter</keycap> key. After entering
533 something, the result will be echoed to the screen. Now use the
534 <command>yes</command> command to automate the entering of a
535 response:</para>
536
537<screen><userinput>yes | ./blfs-yes-test1</userinput></screen>
538
539 <para>Notice that piping <command>yes</command> by itself to the script
540 results in <keycap>y</keycap> being passed to the script. Now try it with a
541 string of text:</para>
542
543<screen><userinput>yes 'This is some text' | ./blfs-yes-test1</userinput></screen>
544
545 <para>The exact string was used as the response to the script. Finally,
546 try it using an empty (null) string:</para>
547
548<screen><userinput>yes '' | ./blfs-yes-test1</userinput></screen>
549
550 <para>Notice this results in passing just the press of the
551 <keycap>Enter</keycap> key to the script. This is useful for times when the
552 default answer to the prompt is sufficient. This syntax is used in the
553 <xref linkend="net-tools-automate-example"/> instructions to accept all the
554 defaults to the many prompts during the configuration step. You may now
555 remove the test script, if desired.</para>
556
557 <bridgehead renderas="sect3">File Redirection to Automate Output</bridgehead>
558
    <para>Automating the building of some packages, especially those
    that require you to read a license agreement one page at a time, requires
    using a method that avoids having to press a key to display each page.
562 Redirecting the output to a file can be used in these instances to assist
563 with the automation. The previous section on this page touched on creating
564 log files of the build output. The redirection method shown there used the
565 <command>tee</command> command to redirect output to a file while also
566 displaying the output to the screen. Here, the output will only be sent to
567 a file.</para>
568
569 <para>Again, the easiest way to demonstrate the technique is to show an
570 example. First, issue the command:</para>
571
572<screen><userinput>ls -l /usr/bin | less</userinput></screen>
573
574 <para>Of course, you'll be required to view the output one page at a time
575 because the <command>less</command> filter was used. Now try the same
576 command, but this time redirect the output to a file. The special file
577 <filename>/dev/null</filename> can be used instead of the filename shown,
578 but you will have no log file to examine:</para>
579
580<screen><userinput>ls -l /usr/bin | less &gt; redirect_test.log 2&gt;&amp;1</userinput></screen>
581
582 <para>Notice that this time the command immediately returned to the shell
583 prompt without having to page through the output. You may now remove the
584 log file.</para>
585
586 <para>The last example will use the <command>yes</command> command in
587 combination with output redirection to bypass having to page through the
588 output and then provide a <keycap>y</keycap> to a prompt. This technique
589 could be used in instances when otherwise you would have to page through
590 the output of a file (such as a license agreement) and then answer the
591 question of <computeroutput>do you accept the above?</computeroutput>.
592 For this example,
593 another short <application>Bash</application> script is required:</para>
594
595<screen><userinput>cat &gt; blfs-yes-test2 &lt;&lt; "EOF"
596<literal>#!/bin/bash
597
598ls -l /usr/bin | less
599
600echo -n -e "\n\nDid you enjoy reading this? (y,n) "
601
602read A_STRING
603
604if test "$A_STRING" = "y"; then A_STRING="You entered the 'y' key"
605else A_STRING="You did NOT enter the 'y' key"
606fi
607
608echo -e "\n\n$A_STRING\n\n"</literal>
609EOF
610chmod 755 blfs-yes-test2</userinput></screen>
611
612 <para>This script can be used to simulate a program that requires you to
613 read a license agreement, then respond appropriately to accept the
614 agreement before the program will install anything. First, run the script
615 without any automation techniques by issuing
616 <command>./blfs-yes-test2</command>.</para>
617
618 <para>Now issue the following command which uses two automation techniques,
619 making it suitable for use in an automated build script:</para>
620
621<screen><userinput>yes | ./blfs-yes-test2 &gt; blfs-yes-test2.log 2&gt;&amp;1</userinput></screen>
622
623 <para>If desired, issue <command>tail blfs-yes-test2.log</command> to see
624 the end of the paged output, and confirmation that <keycap>y</keycap> was
625 passed through to the script. Once satisfied that it works as it should,
626 you may remove the script and log file.</para>
627
628 <para>Finally, keep in mind that there are many ways to automate and/or
629 script the build commands. There is not a single <quote>correct</quote> way
630 to do it. Your imagination is the only limit.</para>
631
632 </sect2>
633
634 <sect2>
635 <title>Dependencies</title>
636
637 <para>For each package described, BLFS lists the known dependencies.
638 These are listed under several headings, whose meaning is as follows:</para>
639
640 <itemizedlist>
641 <listitem>
642 <para><emphasis>Required</emphasis> means that the target package
643 cannot be correctly built without the dependency having first been
644 installed, except if the dependency is said to be
          <quote>runtime,</quote> which means the target package can be built but
646 cannot function without it.</para>
647 <para>
648 Note that a target package can start to <quote>function</quote>
649 in many subtle ways: an installed configuration file can make the
          init system, cron daemon, or bus daemon run a program
651 automatically; another package using the target package as a
652 dependency can run a program from the target package in the
653 building system; and the configuration sections in the BLFS book
654 may also run a program from a just installed package. So if
655 you are installing the target package without a
656 <emphasis>Required (runtime)</emphasis> dependency installed,
          you should install the dependency as soon as possible after the
658 installation of the target package.
659 </para>
660 </listitem>
661 <listitem>
662 <para><emphasis>Recommended</emphasis> means that BLFS strongly
663 suggests this package is installed first (except if said to be
664 <quote>runtime,</quote> see below) for a clean and trouble-free
          build that won't have issues either during the build process or at
666 run-time. The instructions in the book assume these packages are
667 installed. Some changes or workarounds may be required if these
668 packages are not installed. If a recommended dependency is said
669 to be <quote>runtime,</quote> it means that BLFS strongly suggests
670 that this dependency is installed before using the package, for
671 getting full functionality.</para>
672 </listitem>
673 <listitem>
674 <para><emphasis>Optional</emphasis> means that this package might be
675 installed for added functionality. Often BLFS will describe the
676 dependency to explain the added functionality that will result.
677 Some optional dependencies are automatically picked up by the target
678 package if the dependency is installed, while others
679 also need additional configuration options to be enabled
680 when the target package is built. Such additional options are
681 often documented in the BLFS book. If an optional dependency is
682 said to be <quote>runtime,</quote> it means you may install
683 the dependency after installing the target package to support some
684 optional features of the target package if you need these
685 features.</para>
          <para>An optional dependency may be outside of BLFS. If you need
          such an <emphasis>external</emphasis> optional dependency for some
          features, read <xref linkend='beyond'/> for general
          hints about installing an out-of-BLFS package.</para>
690 </listitem>
691 </itemizedlist>
692
693 </sect2>
694
695 <sect2 id="package_updates">
696 <title>Using the Most Current Package Sources</title>
697
698 <para>On occasion you may run into a situation in the book when a package
699 will not build or work properly. Though the Editors attempt to ensure
700 that every package in the book builds and works properly, sometimes a
701 package has been overlooked or was not tested with this particular version
702 of BLFS.</para>
703
704 <para>If you discover that a package will not build or work properly, you
705 should see if there is a more current version of the package. Typically
706 this means you go to the maintainer's web site and download the most current
707 tarball and attempt to build the package. If you cannot determine the
708 maintainer's web site by looking at the download URLs, use Google and query
709 the package's name. For example, in the Google search bar type:
710 'package_name download' (omit the quotes) or something similar. Sometimes
711 typing: 'package_name home page' will result in you finding the
712 maintainer's web site.</para>
713
714 </sect2>
715
716 <sect2 id="stripping">
717 <title>Stripping One More Time</title>
718
719 <para>
720 In LFS, stripping of debugging symbols and unneeded symbol table
721 entries was discussed a couple of times. When building BLFS packages,
722 there are generally no special instructions that discuss stripping
723 again. Stripping can be done while installing a package, or
724 afterwards.
725 </para>
726
727 <bridgehead renderas="sect3" id="stripping-install">Stripping while Installing a Package</bridgehead>
728
729 <para>
730 There are several ways to strip executables installed by a
731 package. They depend on the build system used (see below <link
732 linkend="buildsystems">the section about build systems</link>),
733 so only some
734 generalities can be listed here:
735 </para>
736
737 <note>
738 <para>
        The following methods, which use features of the build system
        (autotools, meson, or cmake), will not strip static libraries if any
        are installed. Fortunately there are not too many static libraries
        in BLFS, and a static library can always be stripped safely by
        running <command>strip --strip-unneeded</command> on it manually.
744 </para>
745 </note>
746
747 <itemizedlist>
748 <listitem>
749 <para>
750 The packages using autotools usually have an
751 <parameter>install-strip</parameter> target in their generated
752 <filename>Makefile</filename> files. So installing stripped
753 executables is just a matter of using
754 <command>make install-strip</command> instead of
755 <command>make install</command>.
756 </para>
757 </listitem>
758 <listitem>
759 <para>
760 The packages using the meson build system can accept
761 <parameter>-D strip=true</parameter> when running
          <command>meson</command>. If you forgot to add this option
          when running <command>meson</command>, you can also run
764 <command>meson install --strip</command> instead of
765 <command>ninja install</command>.
766 </para>
767 </listitem>
768 <listitem>
769 <para>
770 <command>cmake</command> generates
771 <parameter>install/strip</parameter> targets for both the
772 <parameter>Unix Makefiles</parameter> and
773 <parameter>Ninja</parameter> generators (the default is
          <parameter>Unix Makefiles</parameter> on Linux). So just run
775 <command>make install/strip</command> or
776 <command>ninja install/strip</command> instead of the
777 <command>install</command> counterparts.
778 </para>
779 </listitem>
780 <listitem>
781 <para>
782 Removing (or not generating) debug symbols can also be
783 achieved by removing the
784 <parameter>-g&lt;something&gt;</parameter> options
          in C/C++ calls. How to do that is very specific to each
          package, and it does not remove unneeded symbol table entries,
          so it will not be explained in detail here. See also the
          paragraphs about optimization below.
789 </para>
790 </listitem>
791 </itemizedlist>
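
    <para>As noted above, a static library can be stripped manually, for
    example (the library name here is just a placeholder):</para>

<screen role="nodump"><userinput>strip --strip-unneeded /usr/lib/lib<replaceable>foo</replaceable>.a</userinput></screen>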
792
793 <bridgehead renderas="sect3" id="stripping-installed">Stripping Installed Executables</bridgehead>
794
795 <para>
      The <command>strip</command> utility changes files in place, which may
      break anything using the file if it is loaded in memory. Note that if a file is
798 in use but just removed from the disk (i.e. not overwritten nor
799 modified), this is not a problem since the kernel can use
800 <quote>deleted</quote> files. Look at <filename>/proc/*/maps</filename>
801 and it is likely that you'll see some <emphasis>(deleted)</emphasis>
      entries. The <command>mv</command> command just removes the destination file from
803 the directory but does not touch its content, so that it satisfies the
804 condition for the kernel to use the old (deleted) file.
      But this approach can split hard links into duplicated copies,
      causing bloat which is obviously unwanted as we are stripping to
      reduce system size. If two files in the same file system share the
808 same inode number, they are hard links to each other and we should
809 reconstruct the link. The script below is just an example.
810 It should be run as the &root; user:
811 </para>
812
813<screen><userinput>cat &gt; /usr/sbin/strip-all.sh &lt;&lt; "EOF"
814<literal>#!/usr/bin/bash
815
816if [ $EUID -ne 0 ]; then
817 echo "Need to be root"
818 exit 1
819fi
820
821last_fs_inode=
822last_file=
823
824{ find /usr/lib -type f -name '*.so*' ! -name '*dbg'
825 find /usr/lib -type f -name '*.a'
826 find /usr/{bin,sbin,libexec} -type f
827} | xargs stat -c '%m %i %n' | sort | while read fs inode file; do
828 if ! readelf -h $file >/dev/null 2>&amp;1; then continue; fi
829 if file $file | grep --quiet --invert-match 'not stripped'; then continue; fi
830
831 if [ "$fs $inode" = "$last_fs_inode" ]; then
832 ln -f $last_file $file;
833 continue;
834 fi
835
836 cp --preserve $file ${file}.tmp
837 strip --strip-unneeded ${file}.tmp
838 mv ${file}.tmp $file
839
840 last_fs_inode="$fs $inode"
841 last_file=$file
842done</literal>
843EOF
844chmod 744 /usr/sbin/strip-all.sh</userinput></screen>
845
846 <para>
847 If you install programs in other directories such as <filename
848 class="directory">/opt</filename> or <filename
849 class="directory">/usr/local</filename>, you may want to strip the files
850 there too. Just add other directories to scan in the compound list of
851 <command>find</command> commands between the braces.
852 </para>
853
854 <para>
855 For more information on stripping, see <ulink
856 url="https://www.technovelty.org/linux/stripping-shared-libraries.html"/>.
857 </para>
858
859 </sect2>
860
861<!--
862 <sect2 id="libtool">
863 <title>Libtool files</title>
864
865 <para>
866 One of the side effects of packages that use Autotools, including
867 libtool, is that they create many files with an .la extension. These
868 files are not needed in an LFS environment. If there are conflicts with
869 pkgconfig entries, they can actually prevent successful builds. You
870 may want to consider removing these files periodically:
871 </para>
872
873<screen><userinput>find /lib /usr/lib -not -path "*Image*" -a -name \*.la -delete</userinput></screen>
874
875 <para>
876 The above command removes all .la files with the exception of those that
877 have <quote>Image</quote> or <quote>openldap</quote> as a part of the
878 path. These .la files are used by the ImageMagick and openldap programs,
879 respectively. There may be other exceptions by packages not in BLFS.
880 </para>
881
882 </sect2>
883-->
884 <sect2 id="buildsystems">
885 <title>Working with different build systems</title>
886
887 <para>
888 There are now three different build systems in common use for
889 converting C or C++ source code into compiled programs or
      libraries, and their details (particularly, finding out about available
891 options and their default values) differ. It may be easiest to understand
892 the issues caused by some choices (typically slow execution or
893 unexpected use of, or omission of, optimizations) by starting with
894 the <envar>CFLAGS</envar>, <envar>CXXFLAGS</envar>, and
895 <envar>LDFLAGS</envar> environment variables. There are also some
896 programs which use Rust.
897 </para>
898
899 <para>
900 Most LFS and BLFS builders are probably aware of the basics of
901 <envar>CFLAGS</envar> and <envar>CXXFLAGS</envar> for altering how a
902 program is compiled. Typically, some form of optimization is used by
903 upstream developers (<option>-O2</option> or <option>-O3</option>),
904 sometimes with the creation of debug symbols (<option>-g</option>),
905 as defaults.
906 </para>
907
908 <para>
909 If there are contradictory flags (e.g. multiple different
910 <option>-O</option> values),
911 the <emphasis>last</emphasis> value will be used. Sometimes this means
912 that flags specified in environment variables will be picked up before
913 values hardcoded in the Makefile, and therefore ignored. For example,
914 where a user specifies <option>-O2</option> and that is followed by
915 <option>-O3</option> the build will use <option>-O3</option>.
916 </para>
917
918 <para>
919 There are various other things which can be passed in CFLAGS or
      CXXFLAGS, such as allowing the use of the instruction set extensions
      available with a specific microarchitecture (e.g.
      <option>-march=amdfam10</option> or <option>-march=native</option>),
      tuning the generated code for a specific microarchitecture (e.g.
      <option>-mtune=tigerlake</option> or <option>-mtune=native</option>;
      if <option>-mtune=</option> is not used, the microarchitecture from
      the <option>-march=</option> setting will be used), or specifying a
927 specific standard for C or C++ (<option>-std=c++17</option> for
928 example). But one thing which has now come to light is that
929 programmers might include debug assertions in their code, expecting
930 them to be disabled in releases by using <option>-D NDEBUG</option>.
931 Specifically, if <xref linkend="mesa"/> is built with these
932 assertions enabled, some activities such as loading levels of games
      can take extremely long times, even on high-end video cards.
934 </para>
935
936 <bridgehead renderas="sect3" id="autotools-info">Autotools with Make</bridgehead>
937
938 <para>
939 This combination is often described as <quote>CMMI</quote>
940 (configure, make, make install) and is used here to also cover
941 the few packages which have a configure script that is not
942 generated by autotools.
943 </para>
944
945 <para>
946 Sometimes running <command>./configure --help</command> will produce
947 useful options about switches which might be used. At other times,
948 after looking at the output from configure you may need to look
949 at the details of the script to find out what it was actually searching
950 for.
951 </para>
952
953 <para>
954 Many configure scripts will pick up any CFLAGS or CXXFLAGS from the
955 environment, but CMMI packages vary about how these will be mixed with
956 any flags which would otherwise be used (<emphasis>variously</emphasis>:
957 ignored, used to replace the programmer's suggestion, used before the
958 programmer's suggestion, or used after the programmer's suggestion).
959 </para>
960
961 <para>
962 In most CMMI packages, running <command>make</command> will list
963 each command and run it, interspersed with any warnings. But some
964 packages try to be <quote>silent</quote> and only show which file
965 they are compiling or linking instead of showing the command line.
966 If you need to inspect the command, either because of an error, or
967 just to see what options and flags are being used, adding
968 <option>V=1</option> to the make invocation may help.
969 </para>
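
    <para>For example, to see the full command lines while building:</para>

<screen><userinput>make V=1</userinput></screen>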
970
971 <bridgehead renderas="sect3" id="cmake-info">CMake</bridgehead>
972
973 <para>
974 CMake works in a very different way, and it has two backends which
975 can be used on BLFS: <command>make</command> and
976 <command>ninja</command>. The default backend is make, but
977 ninja can be faster on large packages with multiple processors. To
978 use ninja, specify <option>-G Ninja</option> in the cmake command.
979 However, there are some packages which create fatal errors in their
980 ninja files but build successfully using the default of Unix
981 Makefiles.
982 </para>
983
984 <para>
985 The hardest part of using CMake is knowing what options you might wish
986 to specify. The only way to get a list of what the package knows about
987 is to run <command>cmake -LAH</command> and look at the output for that
988 default configuration.
989 </para>
990
991 <para>
      Perhaps the most important thing about CMake is that it has a variety
993 of CMAKE_BUILD_TYPE values, and these affect the flags. The default
994 is that this is not set and no flags are generated. Any
995 <envar>CFLAGS</envar> or <envar>CXXFLAGS</envar> in the environment
996 will be used. If the programmer has coded any debug assertions,
997 those will be enabled unless -D NDEBUG is used. The following
998 CMAKE_BUILD_TYPE values will generate the flags shown, and these
999 will come <emphasis>after</emphasis> any flags in the environment
1000 and therefore take precedence.
1001 </para>
1002
1003 <informaltable align="center">
1004 <tgroup cols="2">
1005 <colspec colnum="1" align="center"/>
1006 <colspec colnum="2" align="center"/>
1007 <thead>
1008 <row><entry>Value</entry><entry>Flags</entry></row>
1009 </thead>
1010 <tbody>
1011 <row>
1012 <entry>Debug</entry><entry><option>-g</option></entry>
1013 </row>
1014 <row>
1015 <entry>Release</entry><entry><option>-O3 -D NDEBUG</option></entry>
1016 </row>
1017 <row>
1018 <entry>RelWithDebInfo</entry><entry><option>-O2 -g -D NDEBUG</option></entry>
1019 </row>
1020 <row>
1021 <entry>MinSizeRel</entry><entry><option>-Os -D NDEBUG</option></entry>
1022 </row>
1023 </tbody>
1024 </tgroup>
1025 </informaltable>
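
    <para>For instance, a typical configuration in the style used throughout
    the book might look like the following sketch (the build type and
    generator are the ones discussed above; other options vary by
    package):</para>

<screen><userinput>mkdir build &amp;&amp;
cd    build &amp;&amp;

cmake -D CMAKE_INSTALL_PREFIX=/usr \
      -D CMAKE_BUILD_TYPE=Release  \
      -G Ninja ..                  &amp;&amp;
ninja</userinput></screen>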
1026
1027 <para>
1028 CMake tries to produce quiet builds. To see the details of the commands
1029 which are being run, use <command>make VERBOSE=1</command> or
1030 <command>ninja -v</command>.
1031 </para>
1032
1033 <para>
1034 By default, CMake treats file installation differently from the other
1035 build systems: if a file already exists and is not newer than a file
1036 that would overwrite it, then the file is not installed. This may be
1037 a problem if a user wants to record which file belongs to a package,
1038 either using <envar>LD_PRELOAD</envar>, or by listing files newer
1039 than a timestamp. The default can be changed by setting the variable
1040 <envar>CMAKE_INSTALL_ALWAYS</envar> to 1 in the
1041 <emphasis>environment</emphasis>, for example by
1042 <command>export</command>'ing it.
1043 </para>
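
    <para>For example, to set it for the rest of the current shell
    session:</para>

<screen><userinput>export CMAKE_INSTALL_ALWAYS=1</userinput></screen>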
1044
1045 <bridgehead renderas="sect3" id="meson-info">Meson</bridgehead>
1046
1047 <para>
1048 Meson has some similarities to CMake, but many differences. To get
1049 details of the defines that you may wish to change you can look at
1050 <filename>meson_options.txt</filename> which is usually in the
1051 top-level directory.
1052 </para>
1053
1054 <para>
1055 If you have already configured the package by running
1056 <command>meson</command> and now wish to change one or more settings,
1057 you can either remove the build directory, recreate it, and use the
1058 altered options, or within the build directory run <command>meson
1059 configure</command>, e.g. to set an option:
1060 </para>
1061
1062<screen><userinput>meson configure -D &lt;some_option&gt;=true</userinput></screen>
1063
1064 <para>
1065 If you do that, the file <filename>meson-private/cmd_line.txt</filename>
1066 will show the <emphasis>last</emphasis> commands which were used.
1067 </para>
1068
1069 <para>
1070 Meson provides the following buildtype values, and the flags they enable
1071 come <emphasis>after</emphasis> any flags supplied in the environment and
1072 therefore take precedence.
1073 </para>
1074
1075 <itemizedlist>
1076 <listitem>
1077 <para>plain: no added flags. This is for distributors to supply their
1078 own <envar>CFLAGS</envar>, <envar>CXXFLAGS</envar> and
1079 <envar>LDFLAGS</envar>. There is no obvious reason to use
1080 this in BLFS.</para>
1081 </listitem>
1082 <listitem>
1083 <para>debug: <option>-g</option> - this is the default if
1084 nothing is specified in either <filename>meson.build</filename>
          or the command line. However, it results in large and slow
          binaries, so we should override it in BLFS.</para>
1087 </listitem>
1088 <listitem>
1089 <para>debugoptimized: <option>-O2 -g</option> - this is the
1090 default specified in <filename>meson.build</filename> of some
1091 packages.</para>
1092 </listitem>
1093 <listitem>
1094 <para>release: <option>-O3</option> (occasionally a package will
1095 force <option>-O2</option> here) - this is the buildtype we use
          for most packages using the Meson build system in BLFS.</para>
1097 </listitem>
1098 </itemizedlist>
1099
1100 <!-- From https://mesonbuild.com/Builtin-options.html#core-options:
1101 b_ndebug: Default value = false, Possible values are
1102 true, false, if-release. Some packages sets it to if-release
1103 so we mistakenly believed if-release had been the default. -->
1104 <para>
1105 The <option>-D NDEBUG</option> flag is implied by the release
1106 buildtype for some packages (for example <xref linkend='mesa'/>).
1107 It can also be provided explicitly by passing
1108 <option>-D b_ndebug=true</option>.
1109 </para>
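
    <para>Putting this together, a typical Meson configuration for a BLFS
    package might look like the following sketch (option names and values
    vary by package):</para>

<screen><userinput>meson setup --prefix=/usr --buildtype=release build &amp;&amp;
ninja -C build</userinput></screen>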
1110
1111 <para>
1112 To see the details of the commands which are being run in a package using
1113 meson, use <command>ninja -v</command>.
1114 </para>
1115
1116 <bridgehead renderas="sect3" id="rust-info">Rustc and Cargo</bridgehead>
1117
1118 <para>
1119 Most released rustc programs are provided as crates (source tarballs)
1120 which will query a server to check current versions of dependencies
      and then download them as necessary. These packages are built using
      <command>cargo build --release</command>. In theory, you can manipulate
      <envar>RUSTFLAGS</envar> to change the optimization level (the default
      for <option>--release</option> is 3, i.e.
      <option>-Copt-level=3</option>, like <option>-O3</option>) or to
      force it to build for the machine it is being compiled on, using
      <option>-Ctarget-cpu=native</option>, but in practice this seems to
1128 make no significant difference.
1129 </para>
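
    <para>For instance, a sketch of such a build (the flags are the ones
    described above; whether they help depends on the package):</para>

<screen role="nodump"><userinput>RUSTFLAGS="-C target-cpu=native" cargo build --release</userinput></screen>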
1130
1131 <para>
1132 If you are compiling a standalone Rust program (as an unpackaged
1133 <filename class='extension'>.rs</filename> file) by running
1134 <command>rustc</command> directly, you should specify
1135 <option>-O</option> (the abbreviation of
1136 <option>-Copt-level=2</option>) or <option>-Copt-level=3</option>
1137 otherwise it will do an unoptimized compile and run
1138 <emphasis>much</emphasis> slower. If you are compiling the program
      for debugging, replace the <option>-O</option> or
1140 <option>-Copt-level=</option> options with <option>-g</option> to
1141 produce an unoptimized program with debug info.
1142 </para>
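
    <para>For example, to build an optimized standalone program (the source
    file name here is just a placeholder):</para>

<screen role="nodump"><userinput>rustc -O <replaceable>program</replaceable>.rs</userinput></screen>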
1143
1144 <para>
1145 Like <command>ninja</command>, by default <command>cargo</command>
1146 uses all logical cores. This can often be worked around,
1147 either by exporting
1148 <envar>CARGO_BUILD_JOBS=<replaceable>&lt;N&gt;</replaceable></envar>
1149 or passing
1150 <option>--jobs <replaceable>&lt;N&gt;</replaceable></option> to
1151 <command>cargo</command>.
1152 For compiling rustc itself, specifying
1153 <option>--jobs <replaceable>&lt;N&gt;</replaceable></option> for
1154 invocations of <command>x.py</command>
1155 (together with the <envar>CARGO_BUILD_JOBS</envar> environment
1156 variable, which looks like a <quote>belt and braces</quote>
1157 approach but seems to be necessary) mostly works. The exception is
      running the tests when building rustc; some of them will
1159 nevertheless use all online CPUs, at least as of rustc-1.42.0.
1160 </para>
1161
1162 </sect2>
1163
1164 <sect2 id="optimizations">
1165 <title>Optimizing the build</title>
1166
1167 <para>
1168 Many people will prefer to optimize compiles as they see fit, by providing
1169 <envar>CFLAGS</envar> or <envar>CXXFLAGS</envar>. For an
1170 introduction to the options available with gcc and g++ see <ulink
1171 url="https://gcc.gnu.org/onlinedocs/gcc-&gcc-version;/gcc/Optimize-Options.html"/>.
      The same content can also be found in <command>info gcc</command>.
1173 </para>
1174
1175 <para>
1176 Some packages default to <option>-O2 -g</option>, others to
1177 <option>-O3 -g</option>, and if <envar>CFLAGS</envar> or
1178 <envar>CXXFLAGS</envar> are supplied they might be added to the
1179 package's defaults, replace the package's defaults, or even be
1180 ignored. There are details on some desktop packages which were
1181 mostly current in April 2019 at
1182 <ulink url="https://www.linuxfromscratch.org/~ken/tuning/"/> - in
1183 particular, <filename>README.txt</filename>,
1184 <filename>tuning-1-packages-and-notes.txt</filename>, and
1185 <filename>tuning-notes-2B.txt</filename>. The particular thing to
1186 remember is that if you want to try some of the more interesting
1187 flags you may need to force verbose builds to confirm what is being
1188 used.
1189 </para>
1190
1191 <para>
1192 Clearly, if you are optimizing your own program you can spend time to
1193 profile it and perhaps recode some of it if it is too slow. But for
1194 building a whole system that approach is impractical. In general,
1195 <option>-O3</option> usually produces faster programs than
1196 <option>-O2</option>. Specifying
1197 <option>-march=native</option> is also beneficial, but means that
1198 you cannot move the binaries to an incompatible machine - this can
1199 also apply to newer machines, not just to older machines. For
      example, programs compiled for <literal>amdfam10</literal> run on
1201 old Phenoms, Kaveris, and Ryzens, but programs compiled for a
1202 Kaveri will not run on a Ryzen because certain op-codes are not
1203 present. Similarly, if you build for a Haswell not everything will
1204 run on a SandyBridge.
1205 </para>
1206
1207 <note>
1208 <para>
1209 Be careful that the name of a <option>-march</option> setting
1210 does not always match the baseline of the microarchitecture
1211 with the same name. For example, the Skylake-based Intel Celeron
1212 processors do not support AVX at all, but
1213 <option>-march=skylake</option> assumes AVX and even AVX2.
1214 </para>
1215 </note>
1216
1217 <para>
1218 When a shared library is built by GCC, a feature named
1219 <quote>semantic interposition</quote> is enabled by default. When
1220 the shared library refers to a symbol name with external linkage
1221 and default visibility, if the symbol exists in both the shared
1222 library and the main executable, semantic interposition guarantees
1223 the symbol in the main executable is always used. This feature
1224 was invented in an attempt to make the behavior of linking a shared
1225 library and linking a static library as similar as possible. Today
1226 only a small number of packages still depend on semantic
      interposition, but the feature is still on by default in GCC,
      causing many optimizations to be disabled for shared libraries because
1229 they conflict with semantic interposition. The
1230 <option>-fno-semantic-interposition</option> option can be passed
1231 to <command>gcc</command> or <command>g++</command> to disable
1232 semantic interposition and enable more optimizations for shared
      libraries. This option is used by default by some packages
      (for example <xref linkend='python3'/>), and it is also the default
      in Clang.
1236 </para>
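
    <para>For example, one way to pass the option when building a CMMI
    package (a sketch; merge it with any other flags you normally
    use):</para>

<screen role="nodump"><userinput>CFLAGS="-O2 -fno-semantic-interposition"   \
CXXFLAGS="-O2 -fno-semantic-interposition" \
./configure --prefix=/usr</userinput></screen>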
1237
1238 <para>
1239 There are also various other options which some people claim are
1240 beneficial. At worst, you get to recompile and test, and then
1241 discover that in your usage the options do not provide a benefit.
1242 </para>
1243
1244 <para>
1245 If building Perl or Python modules,
1246 in general the <envar>CFLAGS</envar> and <envar>CXXFLAGS</envar>
1247 used are those which were used by those <quote>parent</quote>
1248 packages.
1249 </para>
1250
1251 <para>
      For <envar>LDFLAGS</envar>, there are three options that can be used
      for optimization. They are quite safe to use, and the build
      systems of some packages use some of these options as the default.
1255 </para>
1256
1257 <para>
1258 With <option>-Wl,-O1</option>, the linker will
1259 optimize the hash table to speed up the dynamic linking.
1260 Note that <option>-Wl,-O1</option> is completely unrelated to the
1261 compiler optimization flag <option>-O1</option>.
1262 </para>
1263
1264 <para>
1265 With <option>-Wl,--as-needed</option>, the linker will disregard
1266 unnecessary <option>-l<replaceable>foo</replaceable></option> options
      from the command line, i.e. the shared library <systemitem
      class='library'>lib<replaceable>foo</replaceable></systemitem>
      will only be linked if a symbol in <systemitem
      class='library'>lib<replaceable>foo</replaceable></systemitem> is
      really referenced by the executable or shared library being linked.
      This can sometimes mitigate the <quote>excessive dependencies on
1273 shared libraries</quote> issues caused by
1274 <application>libtool</application>.
1275 </para>
1276
1277 <para>
1278 With <option>-Wl,-z,pack-relative-relocs</option>, the linker
1279 generates a more compacted form of the relative relocation entries
1280 for PIEs and shared libraries. It reduces the size of the linked
1281 PIE or shared library, and speeds up the loading of the PIE or
1282 shared library.
1283 </para>
1284
1285 <para>
      The <option>-Wl,</option> prefix is necessary because although the
      variable is named <envar>LDFLAGS</envar>, its content is actually
1288 passed to <command>gcc</command> (or <command>g++</command>,
1289 <command>clang</command>, etc.) during the link stage, not directly
1290 passed to <command>ld</command>.
1291 </para>
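
    <para>For example, to use all three options when configuring and
    building a package, you could export the variable first:</para>

<screen role="nodump"><userinput>export LDFLAGS="-Wl,-O1 -Wl,--as-needed -Wl,-z,pack-relative-relocs"</userinput></screen>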
1292
1293 </sect2>
1294
1295 <sect2 id="hardening">
1296 <title>Options for hardening the build</title>
1297
1298 <para>
1299 Even on desktop systems, there are still a lot of exploitable
1300 vulnerabilities. For many of these, the attack comes via javascript
1301 in a browser. Often, a series of vulnerabilities are used to gain
1302 access to data (or sometimes to pwn, i.e. own, the machine and
1303 install rootkits). Most commercial distros will apply various
1304 hardening measures.
1305 </para>
1306
1307 <para>
1308 In the past, there was Hardened LFS where gcc (a much older version)
1309 was forced to use hardening (with options to turn some of it off on a
1310 per-package basis). The current LFS and BLFS books are carrying
1311 forward a part of its spirit by enabling PIE
1312 (<option>-fPIE -pie</option>) and SSP
1313 (<option>-fstack-protector-strong</option>) as the defaults
1314 for GCC and clang. And, the linkers (<command>ld.bfd</command>
1315 and <command>ld.gold</command>) have also enabled
1316 <option>-Wl,-z,relro</option> (making a part of the GOT immutable)
1317 by default since Binutils 2.27. What is being covered here is
      different: first you have to make sure that the package is indeed
      using your added flags and not overriding them.
1320 </para>
1321
1322 <para>
1323 For hardening options which are reasonably cheap, there is some
1324 discussion in the 'tuning' link above (occasionally, one or more
1325 of these options might be inappropriate for a package). These
1326 options are <option>-D _FORTIFY_SOURCE=2</option>
1327 (or <option>-D _FORTIFY_SOURCE=3</option> which is more secure but
1328 with a larger performance overhead) and
1329 (for C++) <option>-D _GLIBCXX_ASSERTIONS</option>. On modern
1330 machines these should only have a little impact on how fast things
1331 run, and often they will not be noticeable.
1332 </para>
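
    <para>For instance, a sketch of enabling these options for one package
    built with CMMI (note that <option>-D _FORTIFY_SOURCE</option> only
    takes effect when optimization is enabled):</para>

<screen role="nodump"><userinput>CFLAGS="-O2 -D_FORTIFY_SOURCE=2"                         \
CXXFLAGS="-O2 -D_FORTIFY_SOURCE=2 -D_GLIBCXX_ASSERTIONS" \
./configure --prefix=/usr</userinput></screen>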
1333
1334 <para>
1335 The main distros use much more, such as
1336 <option>-Wl,-z,now</option> (disabling lazy binding to enhance
      <option>-Wl,-z,relro</option>, so the <emphasis>entire</emphasis>
1338 GOT can be made immutable), <option>-fstack-clash-protection</option>
1339 (preventing the attacker from using an unchecked offset from a heap
1340 address to modify the stack),
1341 <option>-fcf-protection=full</option>
1342 (utilizing Intel and AMD CET technology to limit the target
1343 addresses of control-flow transfer instructions; to make it really
1344 effective the entire system must be built with this option, Glibc
1345 must be built with <option>--enable-cet</option>, and the system
1346 must run on Intel Tiger Lake or newer, or AMD Zen 3 or newer),
1347 and <option>-ftrivial-auto-var-init=zero</option> (initializing
1348 some variables by filling zero bytes if they are otherwise
1349 uninitialized).
1350 </para>
1351
1352 <para>
1353 In GCC 14, the option <option>-fhardened</option> is a shorthand
1354 to enable all the hardening options mentioned above. It sets
1355 <option>-D _FORTIFY_SOURCE=3</option> instead of
1356 <option>-D _FORTIFY_SOURCE=2</option>.
1357 </para>
1358
1359 <para>
1360 You may also
1361 encounter the so-called <quote>userspace retpoline</quote>
1362 (<option>-mindirect-branch=thunk</option> etc.) which
      is the equivalent of the Spectre mitigations applied to the Linux
1364 kernel in late 2018. The kernel mitigations caused a lot of complaints
1365 about lost performance, if you have a production server you might wish
1366 to consider testing that, along with the other available options, to
1367 see if performance is still sufficient.
1368 </para>
1369
1370 <para>
1371 Whilst gcc has many hardening options, clang/LLVM's strengths lie
1372 elsewhere. Some options which gcc provides are said to be less effective
1373 in clang/LLVM.
1374 </para>
1375
1376 </sect2>
1377
1378</sect1>