Opened 18 years ago

Closed 17 years ago

#1019 closed defect (fixed)

Restore ALSA volumes reliably when udev is installed

Reported by: alexander@…
Owned by: DJ Lucas
Priority: high
Milestone: 6.2.0
Component: Bootscripts
Version: SVN
Severity: blocker
Keywords:
Cc:

Description

Problem: the ALSA initscript is sometimes started before udev creates the device nodes for the ALSA sound cards present in the system, if their modules are loaded by the hotplug initscript. As a result, an ugly "No soundcards found" error message appears on the screen, and the device nodes appear just after that :(

Official upstream recommendation: use an /etc/dev.d callback instead of the initscript to restore ALSA volumes. An example of such a callback is provided inside the udev tarball. However, the example is wrong: it fails if the sound card driver is not a module and /usr is on a separate partition. More details: this callback is called from inside the udevstart binary from the udev initscript. /usr is not mounted at that point, which is why the callback scriptlet fails.

We should develop an approach that works with all possible combinations of the following factors:

1) The driver is / isn't a module loadable by hotplug
2) /usr is / isn't a separate partition

Side note: it seems that the only way for the dev.d scriptlet to wait for /usr to become mounted is by polling (spinning, sleeping and retrying) -- but the dev.d mechanism was created precisely to avoid such polling. So the official upstream solution fails, and the dev.d callback gains nothing (instead of waiting for the device node we now wait for /usr). Feel free to prove that I am wrong.

Of course, we could just declare all configurations with a separate /usr broken and unsupported, but that's probably not the best thing to do.

If this bug is not fixed by 2004-12-01, it will be marked as "WONTFIX" here and transferred to LFS Bugzilla.

Change History (15)

comment:1 by bryan@…, 18 years ago

Could we create some sort of dual-dependency setup using hotplug and the dev.d directory? We'd create a dev.d handler for Alsa, and also a new /etc/hotplug/fsmounted.agent. Then udev would call our dev.d handler, and we could have the mountfs init script call the fsmounted.agent (with "/sbin/hotplug fsmounted").

Both scripts would check for a "flag" file somewhere. (It will probably have to be on a ramfs.) If it does not exist, then this script is executing first, so it creates the flag file and exits. If the file does exist, then the script deletes the file and calls alsactl restore.

There is a small race window between checking for the file's existence and creating it, but shell has no way to use O_CREAT | O_EXCL when opening files. We could write a short C program that would take a filename on its command line, open it with O_CREAT | O_EXCL, then exit(0) if successful, and exit(1) if not (there is no race condition this way). We could call that C program from each script.
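A minimal sketch of such a helper in C, as proposed above (the function name is illustrative, and the command-line wrapper shown in the comment is an assumption, not code from the ticket):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Atomically create `path` with O_CREAT | O_EXCL.
 * Returns 0 if we created it (this event fired first),
 * 1 if it already existed (the other event already fired). */
int create_exclusive(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0)
        return 1;          /* file already exists (or open failed) */
    close(fd);
    return 0;
}

/* A command-line wrapper, usable from both scripts, would simply be:
 *
 *   int main(int argc, char **argv) {
 *       return argc == 2 ? create_exclusive(argv[1]) : 2;
 *   }
 */
```

Because open(2) with O_CREAT | O_EXCL is atomic in the kernel, two callers racing on the same path cannot both see exit status 0, which closes the race window described above.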

This works in all cases for factors 1 and 2 referenced above. In any case, alsactl restore is only called after the devices are created AND mountfs has run.

comment:2 by kpfleming@…, 18 years ago

Adam J. Richter just recently posted an "flock" command-line tool to the linux-hotplug-devel list, expressly for the purpose of being used by hotplug-driven scripts. Take a look in the list archives for it; it's pretty simple.

This tool would do exactly what you suggest: lock a file, then run a specified command with the lock held. Any other attempts to lock at the same time would wait until the first lock is released, so the command run would have to be a script that would immediately check to see if it still needed to perform its task after obtaining the lock.
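Combining the flag-file scheme from comment:1 with such a lock, both handlers could share a sketch like the following (the paths, the function name, and the fd-style `flock` invocation are assumptions, not from the ticket; modern util-linux ships an `flock` with this interface):

```shell
#!/bin/sh
# Hypothetical rendezvous: called once by the dev.d handler and once by
# the fsmounted agent; whichever arrives second runs the real work.

FLAG=/dev/shm/alsa-restore.flag    # assumed flag location on a ramfs
LOCK=/var/lock/alsa-restore.lock   # assumed lock file path

rendezvous() {
    (
        flock 9                    # serialize the check-and-set on fd 9
        if [ -e "$FLAG" ]; then
            rm -f "$FLAG"
            "$@"                   # second arrival: do the work
        else
            : > "$FLAG"            # first arrival: just leave a marker
        fi
    ) 9> "$LOCK"
}

# Both scripts would then invoke, e.g.:
#   rendezvous alsactl restore
```

Holding the lock across the check-and-set means the caller need not use an O_EXCL helper; the re-check after acquiring the lock is exactly the "see if the task is still needed" step described above.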

comment:3 by bdubbs@…, 17 years ago

Priority: highest → high

P1 is reserved for security bugs.

comment:4 by DJ Lucas, 17 years ago

Milestone: future → 6.1
Owner: changed from blfs-book@… to DJ Lucas

comment:5 by DJ Lucas, 17 years ago

Status: new → assigned

comment:6 by DJ Lucas, 17 years ago

Resolution: fixed
Status: assigned → closed

Created a dev.d script that runs a while loop in the background, waiting for /usr/sbin/alsactl to become available. This happens behind the regular boot process; nothing is displayed on screen.

comment:7 by LFS-User@…, 17 years ago

rep_platform: Other → All
Resolution: fixed
Severity: major → blocker
Status: closed → reopened
Version: a-SVN → b-6.1-pre1

The dev.d handler script is not working for me, though according to off-line messages with DJ, it should be: the book contains everything required to restore the volumes. Here is a relevant post to the -dev list with details about the problem:

http://linuxfromscratch.org/pipermail/blfs-dev/2005-August/011025.html

I'm reopening the bug and marking it as a blocker so that we fix it (or let me know what *I've* done wrong in my configuration) before 6.1 is released.

comment:8 by chris@…, 17 years ago

I don't really know anything about how that startup script works, but I doubt that it's anything you did wrong because I have the same problem.

comment:9 by LFS-User@…, 17 years ago

Milestone: 6.1 → 6.2
Version: b-6.1-pre1 → a-SVN

This issue won't affect BLFS-6.1. It is, however, a show-stopper for any version of BLFS after 6.1.

comment:10 by DJ Lucas, 17 years ago

Suggesting to move the setup to a udev rules file and script. The existing script is currently broken with updated udev (versions greater than 058). There is a backwards-compatible binary that restores the dev.d functionality; however, the udev rule is much easier anyway, and provides some limited assurance (maybe it won't change any time soon) that it will keep working in future releases of udev.

See the proposed change in this thread. However, some still want the bootscript restored to its previous functionality (both may be doable with a yet-undefined compromise if need be). Further, with the changes being discussed in LFS Bugzilla regarding the udev rules files, the suggested 15-alsa.rules file may *have* to be created (and explained) in future BLFS.

http://archive.linuxfromscratch.org/mail-archives/blfs-dev/2005-August/011142.html
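For reference, the rules-file approach would amount to a rule of roughly this shape (an illustrative sketch only: the match keys and RUN syntax vary across udev versions, and the script path is a placeholder, not the book's actual file):

```
# 15-alsa.rules -- illustrative sketch, not the rule adopted by the book
SUBSYSTEM=="sound", KERNEL=="controlC[0-9]*", ACTION=="add", \
    RUN+="/path/to/alsa-restore-script"
```

Matching on the control device node means the rule fires only once per sound card, after udev has actually created the node, which sidesteps the original "No soundcards found" race.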

comment:11 by DJ Lucas, 17 years ago

The proposed change went into BLFS a while ago. It looks to be the solution. Not yet closing the bug, as the init script has not yet been laid to rest, though it seems all is well; just waiting on comments.

comment:12 by alexander@…, 17 years ago

Not closing because the script has bugs:

1) an unconditional sleep at the top of the loop
2) use of /usr/bin/expr (with non-bash) while waiting for /usr
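A corrected loop along these lines might look like the following (a sketch only; the function name, timeout, and background invocation are illustrative, not the script actually in the book):

```shell
#!/bin/sh
# Fixes both points: test the condition *before* sleeping, and count
# iterations with POSIX shell arithmetic instead of /usr/bin/expr
# (which itself lives on the not-yet-mounted /usr).

wait_and_restore() {
    # $1 = path to alsactl, $2 = timeout in seconds
    i=0
    while [ ! -x "$1" ]; do
        if [ "$i" -ge "$2" ]; then
            return 1              # give up: /usr never appeared
        fi
        sleep 1
        i=$((i + 1))              # $(( )) is a shell builtin
    done
    "$1" restore
}

# The handler would run this in the background, e.g.:
#   wait_and_restore /usr/sbin/alsactl 60 &
```

If alsactl is already executable (driver built in, /usr not separate), the loop body never runs and no sleep is incurred at all.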

comment:13 by DJ Lucas, 17 years ago

Status: reopened → new

Requested changes have been made in the script. Checks OK.

comment:14 by DJ Lucas, 17 years ago

Status: new → assigned

comment:15 by DJ Lucas, 17 years ago

Resolution: fixed
Status: assigned → closed