Category: linux

  • Embracing Open Technologies

    As a computer scientist, my software and hardware environment is the most critical part of my professional life. Furthermore, as a digital native, this landscape is the stratum upon which many of my interactions are built. Just as in our physical life, our digital life should inhabit healthy surroundings. Thus, I’ve entered a period of deep contemplation about the services I use, and have started embracing the ethos of the GNU Project: the tech we use should reflect the values we hold. To this end, I am making three gradual shifts to my computing environment: adopting Linux, migrating to GitHub, and deactivating my Facebook account.

    Ownership, Context, Responsibilities

    The first notion is one of ownership, and there are two aspects: licensing and data. Open-source licensing solves many distribution problems, allowing system-wide update managers that upgrade all my software at once, rather than bombarding me with popup windows for each application. However, not all software works this way, and so we must confront the ambiguous reality of digital rights management (DRM). Last month, I had to replace my motherboard, which triggered Windows to inform me that I may have been a victim of software piracy. This is because the license is tied to the physical installation of the software, rather than to my right to use it. App stores, such as the Steam Platform, solve this problem by tying the software to the user, rather than to the installation. So long as DRM does not interfere with the portability of my intellectual property, I am comfortable with it.

    The cloud is a double-edged sword when it comes to ownership and portability. On the one hand, by distributing data across multiple servers, we gain reliability and ubiquitous access, at the expense of security. However, many cloud storage implementations (e.g., Dropbox) do not follow file transfer standards that have been in place since the ’80s, locking you into their proprietary service and software. In contrast, services like GitHub offer remote hosting but do not lock you into their system – your data is always portable. Amazon MP3 also offers portability through unencrypted MP3s with unlimited downloads. By adhering to standards, applications guarantee openness of data, so long as the standards are published and APIs are available.

    However, standards, even when published, require compliance and ubiquity, and it is here that Facebook fails. While championing the Open Graph protocol for data, Facebook follows the old Microsoft approach to standards: “Embrace, extend, and extinguish.” Messages are the clearest example of this. Every user on Facebook automatically has an e-mail address @facebook.com. This address, though, is not accessible via the standard IMAP or POP protocols, yet it can receive messages from any address, locking users into the Facebook ecosystem. We are digital sharecroppers, handing over content with false promises of ownership, constantly undermined by forced changes that benefit corporate interests.

    The context of these messages has also rapidly changed. While they were once analogous to e-mail, they are now analogous to chat, a wildly different medium (with the Jabber/XMPP open standard giving a facade of openness). Wall posts have undergone similar context shifting – from the early days of wall-to-wall conversations, to status comments, to the timeline – all the while not offering easily accessible search. Control over context is a critical right for digital interactions, a point argued best by danah boyd. With nearly one billion users, Facebook is a self-described “social utility”, which vests in it a social responsibility to its users. Given its rejection of this responsibility, I have deactivated my Facebook account in favor of controlling my own context at my personal web page. It is my hope that future social networks will maintain a balance between the free-for-all of MySpace pages and the rigor of Facebook profiles.

    We must also have the right to be forgotten. Facebook maintains negative-space data: based on network structure alone, it is possible to infer unreported profile data and even unregistered users. Klout auto-computes its metric for all Twitter users, regardless of whether they have registered for the service, driving thousands of registrations just to opt out and forcing people to hand over personal data regardless of their participation. This is a major problem for all social applications. The power of social applications is mighty, and maintaining user control is critical, lest we unintentionally surrender our identity to others.

    Dimensions of Services

    While I’ve sketched out some specific considerations, there are a few general principles to extract. It’s important to note that the above arguments have little to do with the notion of privacy, highlighting that the principle of openness is very different from the principle of publicity. It is possible to have an open system which is private. For example, private GitHub repositories are inherently open: the fundamental data, the code, is fully accessible to the user, while the private setting merely keeps it from the public. Privacy and openness are also separate from commercial interests and cost. GMail is a private, open, free, commercial system: it adheres to the very same IMAP protocol as all other mail servers, yet it is monetized for the company, despite storing private information and being a free service. When it comes to privacy, we must first start with openness, because privacy is built on trust. If you are not trusted with access to your own data, how can you trust that system with it?

    Contemplating services within this framework still has issues: how do I deal with Steam, which is a closed, private, commercial service? The last aspect is portability. While my software is locked to the Steam service, it is not locked to a particular computer. Richard Stallman even makes a well-tempered argument that Steam can be beneficial for the Linux ecosystem by offering certain freedoms of choice, and the company itself has made a huge commitment to open-source development – rapidly improving Linux graphics drivers.

  • Gentoo: Subversion not permanently accepting SSL certs

    Today I had a rather frustrating issue: svn would not allow me to permanently accept an SSL certificate under Gentoo, offering only the options to reject it or accept it temporarily.

    Error validating server certificate for 'xxxxxxxx':
     - The certificate is not issued by a trusted authority. Use the
       fingerprint to validate the certificate manually!
     - The certificate has an unknown error.
    Certificate information:
     - Hostname: xxxxxxxx
     - Valid: from xxxxxxxx until xxxxxxxx
     - Issuer: xxxxxxxx
     - Fingerprint: xxxxxxxx
    (R)eject or accept (t)emporarily?
    

    After some Googling, I found Bug 295617: subversion won’t save bad certificates permanently with Neon 0.29. By this point Neon 0.28 had left the portage tree, so downgrading was not an easy option. However, a comment on Bug 238529 hinted at a workaround: build Neon without GnuTLS.

    The easy fix:

    echo 'net-libs/neon -gnutls' >> /etc/portage/package.use
    emerge -DN subversion
    
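    Before rebuilding, you can sanity-check that the flag was recorded. A minimal sketch, run against a scratch copy of package.use so it doesn’t touch the real /etc/portage:

```shell
# Use a scratch directory as a stand-in for /etc/portage.
tmpdir=$(mktemp -d)
pkguse="$tmpdir/package.use"

# The same append as above, pointed at the scratch file.
echo 'net-libs/neon -gnutls' >> "$pkguse"

# Confirm the USE flag change is recorded before running emerge.
grep -q '^net-libs/neon .*-gnutls' "$pkguse" && echo "flag recorded"

rm -rf "$tmpdir"
```

    On a real system, `grep net-libs/neon /etc/portage/package.use` (or `emerge -pv net-libs/neon`, which shows the effective USE flags) does the same check in place.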

    Neon should rebuild, and the certificate prompt will now offer the permanent option:

    Error validating server certificate for 'xxxxxxxx':
     - The certificate is not issued by a trusted authority. Use the
       fingerprint to validate the certificate manually!
     - The certificate has an unknown error.
    Certificate information:
     - Hostname: xxxxxxxx
     - Valid: from xxxxxxxx until xxxxxxxx
     - Issuer: xxxxxxxx
     - Fingerprint: xxxxxxxx
    (R)eject, accept (t)emporarily or accept (p)ermanently?
    
  • virtualbox-bin in Gentoo

    Some non-Linux-dork posts are in the pipe, but today I had issues getting VirtualBox up and running on Gentoo. Here are some proper install instructions to work around Bug 283617. I’ll get to fixing the ebuild later this weekend.

    emerge virtualbox-bin
    chmod 4750 /opt/VirtualBox/VBoxNetAdpCtl
    chmod 4510 /opt/VirtualBox/VBoxSDL /opt/VirtualBox/VBoxHeadless /opt/VirtualBox/VirtualBox
    gpasswd -a youruser vboxusers
    

    After a logout/login, VirtualBox should appear in your Applications menu, and can be run from the command line with VirtualBox.
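    The chmod modes above set the setuid bit (the leading 4) alongside the usual permission triplet, which is what lets the VirtualBox binaries do privileged network setup. If you want to verify the bits stuck, stat can print the octal mode; a minimal check against a scratch file (on a real install you would point stat at /opt/VirtualBox/VBoxNetAdpCtl and friends):

```shell
# Demonstrate verifying a setuid mode on a scratch file;
# substitute the real /opt/VirtualBox binaries on Gentoo.
f=$(mktemp)
chmod 4750 "$f"
stat -c '%a' "$f"   # prints 4750: setuid bit + rwxr-x---
rm -f "$f"
```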

  • Installing a Brother Printer on Gentoo

    I’ve been migrating over to Gentoo from Ubuntu (more on this later) and today had the lovely experience of installing a printer. Since at least two other computers will be needing these instructions, here we are:

    Install CUPS

    1. emerge cups
    2. /etc/init.d/cupsd start
    3. rc-update add cupsd default

    Install Driver

    1. Download the LPD and PPD RPM drivers from Brother’s Linux driver site.
    2. emerge rpm tcsh
    3. rpm -ihv --nodeps (lpr-drivername)
    4. rpm -ihv --nodeps (cupswrapper-drivername)
    5. Verify the drivers installed correctly: rpm -qa | grep -e (lpr-drivername) -e (cupswrapper-drivername) (if these are your only rpm packages, just use rpm -qa)
    6. Create a symlink to the filter: ln -s /usr/lib/cups/filter/brlpdwrapper[printer name] /usr/libexec/cups/filter/brlpdwrapper[printer name]
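    Step 6 is needed because Brother’s wrapper installs its CUPS filter under /usr/lib/cups/filter, while Gentoo’s CUPS looks in /usr/libexec/cups/filter. A minimal sketch of creating and verifying such a link, using scratch directories and a hypothetical printer name in place of the real paths:

```shell
# Scratch stand-ins for the two filter directories.
src=$(mktemp -d)   # stands in for /usr/lib/cups/filter
dst=$(mktemp -d)   # stands in for /usr/libexec/cups/filter

# "HL2170W" is a hypothetical printer name; use your model's.
touch "$src/brlpdwrapperHL2170W"
ln -s "$src/brlpdwrapperHL2170W" "$dst/brlpdwrapperHL2170W"

# Confirm the link resolves back to the wrapper.
readlink "$dst/brlpdwrapperHL2170W"

rm -rf "$src" "$dst"
```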

    Add printer

    1. In a browser, go to the CUPS server at http://localhost:631/
    2. Click Add Printer and enter a name. Location and description are optional, but user-friendly.
    3. On the next page select: Device: AppSocket/HP JetDirect
    4. On the next page enter: Device URI: socket://192.168.1.11 (substitute with the IP address of your printer)
    5. The final page has a list of printer manufacturers. Skip that and click Choose File. Select the proper PPD file at /usr/share/cups/model/(printermodel).ppd. Click next.
    6. Print a test page and enjoy!
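    For reference, the AppSocket/JetDirect backend speaks raw TCP to the printer, by default on port 9100; CUPS assumes that port when the URI omits it, but you can state it explicitly. A trivial sketch of building the URI (the IP is a placeholder, as above):

```shell
# Build an explicit AppSocket device URI; 9100 is the
# standard JetDirect port. Substitute your printer's IP.
printer_ip=192.168.1.11
echo "socket://${printer_ip}:9100"   # prints socket://192.168.1.11:9100
```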

    As an aside, I did stumble upon the Brother PPD source code; however, there were no makefiles for my printer, nor were there any LPD drivers. It is unfortunate to have rpm or dpkg as a dependency for my printer drivers, but so be it – they’re lightweight packages on their own.

  • Dealing with ATi’s Linux Drivers

    ATI’s Linux support has gotten much better, but there is still much to be desired. Kernel upgrades pushed through the update manager tend to break the ATI kernel module. I’ve found the quickest, most painless way to recover is to simply uninstall and then reinstall the drivers:

    sudo /usr/share/ati/fglrx-uninstall.sh
    

    Download the latest drivers from the ATI site: 32-bit and 64-bit <http://support.amd.com/us/gpudownload/linux/Pages/radeon_linux.aspx>.

    Open a terminal in the directory with the downloaded file (note: your exact file name may be different):

    sudo chmod +x ati-driver-installer-10-2-x86.x86_64.run
    sudo ./ati-driver-installer-10-2-x86.x86_64.run
    

    Install the drivers, restart the computer and type the following into a terminal:

    sudo aticonfig -f --initial
    

    Then restart X (Ctrl+Alt+Backspace) or restart the computer and all will be well!

    Update: If you experience black or grey screen artifacts in Firefox/Thunderbird using Catalyst 10.6 or higher, it may be due to the new 2D rendering system. To force use of the old XAA system run the following command after the initial aticonfig setup:

    sudo aticonfig --set-pcs-str=DDX,ForceXAA,TRUE
    

    Restart X, and all should be well!