Category: tech

  • The New Social Media Landscape

    Social media has been important to my life in many ways. In particular, Instagram has enriched my life through friendships, relationships, and my development as an amateur photographer through to establishing a photography business. However, many recent changes concern me going forward. While the moderation changes are alarming, it’s honestly the increasing presence of AI agents on these platforms that has me reconsidering my engagement.

    Outside of photography, I am an artificial intelligence researcher, with a PhD in Cognitive Science and Complex Systems from Indiana University. While I was at Indiana, I took courses with several professors who monitor the spread of disinformation by coordinated bot networks through the Observatory on Social Media (OSoMe).

    This article is aimed at the general public, explaining how we arrived at our present difficulties on the web through ad-funded business models. Here, I focus on Google and Meta and their respective challenges in measuring engagement and controlling spam. I conclude with some thoughts on the affordances of various social media platforms with respect to three aspects of behavior on the web: content, connection, and commerce. Finally, I implore people to consider the business models of the tools they use, with a few suggestions on landing spots.

    Algorithms

    Algorithms and data structures are the two core elements of computer science. An algorithm is a way of doing something – a formal set of procedures for problem-solving. This contrasts with popular usage, which focuses on a specific class of ranking algorithms that determine the way information is presented in a “feed”.

    This popular usage has its origins in the PageRank algorithm that allowed Google to quickly usurp legacy search engines such as Yahoo and AltaVista. The magic ingredient was looking at the structure of the web and hypothesizing that more important pages will have more incoming links. However, this algorithm can be quickly exploited – by creating sets of pages that link into a particular page, the target page’s importance can be artificially inflated. Thus, in order to maintain search result quality, Google augments PageRank with other metrics when presenting results – a measure of trust for each domain, augmentation with keyword blends, etc. Each time the algorithm changes, publishers attempt to exploit the new version, fueling an arms race known as “Search Engine Optimization” (SEO).
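
    To make the core idea concrete, here is a minimal sketch of PageRank as power iteration in Python. The toy link graph, damping factor, and iteration count are illustrative assumptions, not Google’s production configuration.

      # Minimal PageRank sketch: rank pages by the structure of incoming links.
      # The toy graph and the 0.85 damping factor are illustrative choices.
      def pagerank(links, damping=0.85, iterations=50):
          """links maps each page to the list of pages it links out to."""
          pages = list(links)
          n = len(pages)
          rank = {p: 1.0 / n for p in pages}
          for _ in range(iterations):
              new_rank = {p: (1.0 - damping) / n for p in pages}
              for page, outlinks in links.items():
                  if not outlinks:                      # dangling page: spread evenly
                      for p in pages:
                          new_rank[p] += damping * rank[page] / n
                  else:
                      share = damping * rank[page] / len(outlinks)
                      for target in outlinks:
                          new_rank[target] += share
              rank = new_rank
          return rank

      # A tiny "link farm" (c, d, e all pointing at b) inflates b's score --
      # the exploit described above.
      toy_web = {"a": ["b"], "b": ["a"], "c": ["b"], "d": ["b"], "e": ["b"]}
      print(pagerank(toy_web))

    Running this, b edges out a and dwarfs the farm pages themselves – exactly the kind of artificial inflation that forces a search engine to layer trust signals on top of raw link structure.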

    When the goals of the search company are to provide high-quality search results and the goals of the publishers are to promote high-quality content, this game is productive – an ecosystem in which “content is king”. However, the motives of both publisher and search engine can be easily compromised. For publishers, the goal of high rankings can overtake content quality, resulting in a web designed for Google instead of humans. For search engines, an ad-based business model conflicts with users’ desire for high-quality search results by allowing paid intrusions into the ranking algorithm. Eventually, the scales tip to the extreme articulated by Google’s VP of Finance: “[W]e can mostly ignore the demand side…(users and queries) and only focus on the supply side of advertisers, ad formats and sales.”

    This process, of compromising product quality for users and extracting as much value out of business customers as possible, is known as “enshittification”, a term coined by the tech critic, writer, and copyright reform advocate Cory Doctorow:

    First, [companies] are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. 

    Engagement

    In the social media space, enshittification has happened gradually through algorithmic changes, as Meta, TikTok, and X have shifted their default feeds from a chronological timeline of the profiles a user follows to paid placement, just as Google did. Each time the app is opened, thousands of potential posts are evaluated – from paid advertisements to recommended posts to the slight chance of seeing something from someone you follow. All these changes are carefully monitored to drive engagement metrics that can be marketed to advertisers: clicks, time-on-site, comments, likes, etc.
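
    To illustrate the shift, here is a hedged sketch contrasting a chronological timeline with an engagement-weighted one. The post fields and scoring weights are hypothetical, chosen only to show how paid placement and predicted engagement can crowd out the accounts you actually follow.

      # Sketch: chronological feed vs. engagement-weighted feed.
      # Post fields and weights are hypothetical, for illustration only.
      posts = [
          {"author": "friend",     "followed": True,  "ad": False, "predicted_engagement": 0.2, "age_hours": 1},
          {"author": "advertiser", "followed": False, "ad": True,  "predicted_engagement": 0.9, "age_hours": 5},
          {"author": "influencer", "followed": False, "ad": False, "predicted_engagement": 0.8, "age_hours": 3},
      ]

      def chronological(feed):
          """The old default: only accounts you follow, newest first."""
          return sorted((p for p in feed if p["followed"]), key=lambda p: p["age_hours"])

      def engagement_ranked(feed):
          """The new default: everything is scored; ads and recommendations get boosted."""
          def score(p):
              s = p["predicted_engagement"]
              if p["ad"]:
                  s += 1.0        # paid placement outranks organic content
              if p["followed"]:
                  s += 0.1        # a slight nudge for people you follow
              return s
          return sorted(feed, key=score, reverse=True)

      print([p["author"] for p in chronological(posts)])      # ['friend']
      print([p["author"] for p in engagement_ranked(posts)])  # ['advertiser', 'influencer', 'friend']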

    However, these metrics can be gamed by the social media companies themselves. This happened in the “pivot to video”: in 2015, Facebook encouraged news and media companies to post videos on their platform, rather than linking to written content hosted elsewhere. Facebook cited increased engagement, and many companies reduced their footprint – some going as far as removing their homepage altogether (Mashable, CollegeHumor, Funny or Die) and cutting their writing teams (Fox Sports, MTV News, Vice News). These companies all missed that they had lost control of their relationship with their audience. Once the transition to Facebook was complete, the company implemented a pay-to-play scheme for exposure. This ultimately led to the demise of sites such as CollegeHumor and Funny or Die.

    In 2018, it was alleged that Facebook’s “pivot to video” exaggerated the success of videos on the platform, with viewing metrics inflated by as much as 900 percent. The lawsuit was settled in 2019 for $40 million, with Facebook admitting no wrongdoing, but the damage to legacy media was done. Ironically, the “pivot to video” had also damaged Facebook’s long-term metrics for “organic posts from individuals.” As users began to see Facebook as a platform for advertisers, rather than a place to maintain connections, they disengaged from the platform – the final step of “enshittification”.

    Spam, Bots, and Slop

    Spam is the repeated sending of unsolicited messages to a large audience for advertising, propaganda, or other purposes. The lines between SEO-optimized content and spam have long been blurred. On social media platforms, spam is typically carried out by “bots” – automated social agents that create content to influence the algorithm so it promotes their message. These “bots” have been widely seen as a problem, as they create a negative user experience. Furthermore, nation-state actors have often used “bot farms” to spread misinformation on social media.

    However, not all bots are malicious, making it hard to remove all automated activities on social media. For example, the National Weather Service has many “bot” accounts to communicate weather statements, watches, and warnings to the general public. Other areas are more gray – engagement bots can automate liking of comments or posts and appear indistinguishable from human usage. Any platform with an open API is subject to both malicious and benign use, as defined by the particular terms of service.

    To improve user experience and maintain an audience for ad-driven business models, social media companies have traditionally waged war on malicious bots. Ostensibly, this was part of Elon Musk’s rationale for purchasing then-Twitter. Unfortunately, when APIs were closed or rate-limited, independent tools for evaluating whether an account was a bot were forced to shut down. Internal teams working on misinformation and teams removing malicious bots had significant overlap, so as social media companies increasingly abandon their in-house misinformation teams, the prevalence of bots has increased. These next-generation bots are also increasingly capable, as AI tools have given them multi-modal functionality.

    However, the pressures of enshittification operate here as well – scrolling through spam content increases certain engagement metrics, such as time-on-site and posts served. While user experience suffers and other metrics decrease, such as likes and comments, bots can have their place in driving ad sales by creating fake engagement for the ad market to promote. The deployment of new AI production strategies has accelerated the accumulation of spam content, exemplified by AI “travel influencers”. Metrics that do not differentiate between bot and human engagement are useless, but they are the new norm for end users.

    Meta has recently decided to go all-in on this strategy, telling the Financial Times: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.” In implementation, these AI users were undifferentiated from human users in their posts. In fact, they bore the blue “verified” checkmark on the platform, indicating they should be trusted more than other users. Additionally, they were unblockable, meaning that a human user could not “opt out” of the experience.

    Meta backpedaled by removing the AI profiles, but reporting on the removal has emphasized the politicized aspects of these AI users rather than the fundamental transgression: by creating artificial users, undifferentiated from human users, the “social” aspects of social media are compromised. This erodes any trust that users are real people – the long-standing conspiracy theory known as Dead Internet Theory is now Meta’s business plan. While some articles were concerned with the “digital blackface” of such profiles as “Brian – Everybody’s grandpa” and “Liv – Proud Black Queer momma of 2 & truth-teller”, these caricatures only intensify the core offensiveness of pitching these agents as equals for genuine human connection. The problem is not the aspects of humanity they attempt to mimic, but the mimicry itself.

    This embrace of AI has also been seen at Google where, instead of embracing the arms race with purveyors of AI-generated content to maintain high-quality search results, the company has simply placed its own AI Summary before any search results – paid or organic. The summary uses retrieval-augmented generation (RAG) to link to alleged citations supporting its claims. Recently, the search engine placed AI-generated content above the original article in its ranked results, an alarming loss for both searchers and publishers.
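
    For readers unfamiliar with the term, here is a minimal sketch of the RAG pattern: retrieve candidate documents, then generate an answer conditioned on them and cite them. The keyword-overlap retriever, the toy documents, and the generate_summary stub are hypothetical stand-ins, not a description of Google’s system; the failure mode above arises when the retrieved pages are themselves low-quality or AI-generated.

      # Minimal retrieval-augmented generation (RAG) sketch.
      # The retriever and the "generator" are illustrative stand-ins only.
      documents = {
          "https://example.com/pagerank":  "PageRank scores pages by their incoming links.",
          "https://example.com/seo":       "SEO is the practice of optimizing pages for ranking algorithms.",
          "https://example.com/gardening": "Tomatoes grow best in full sun.",
      }

      def retrieve(query, docs, k=2):
          """Rank documents by naive keyword overlap with the query."""
          q_terms = set(query.lower().split())
          scored = sorted(docs.items(),
                          key=lambda item: len(q_terms & set(item[1].lower().split())),
                          reverse=True)
          return scored[:k]

      def generate_summary(query, retrieved):
          """Stub generator: a real system would have an LLM write this text,
          conditioned on the retrieved passages, and show the links as citations."""
          citations = [url for url, _ in retrieved]
          answer = " ".join(text for _, text in retrieved)
          return {"query": query, "summary": answer, "citations": citations}

      query = "how does pagerank ranking work"
      print(generate_summary(query, retrieve(query, documents)))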

    Trust

    The degradation of search and social media products under the ad-based revenue model is apparent to anyone who uses Google, Meta, or X. As consumers, we ostensibly have a choice of platforms (or can disengage altogether). As businesses, disengagement is tempered by where the eyeballs are, which means compromising ourselves to untrustworthy partners like Meta, who inflate metrics and invent artificial users.

    As you evaluate your relationship with these platforms, it is helpful to think of three aspects: content, connection, and commerce. Underlying each one is trust – that information has been vetted, that users are “real”, and that business will flow. At present, many platforms fail on these aspects of trust. In the case of Meta and X, deliberately so as they remove internal misinformation teams. In Google’s case, unreliable AI Summaries remove trust in content. At Meta, company-owned AI profiles remove the trust that users are real, removing connection. At X and across all Meta platforms, algorithmic deprioritization of links outside their platforms reduces commerce.

    Fighting Enshittification

    Ultimately, the highest trust comes from owning your own distribution channels. However, “surfing the web” has been replaced with “scrolling” for most discovery activities. E-mails are rarely seen by end users, as spam filters have advanced. This makes the maintenance of your own website or newsletter feel like shouting into the void. 

    My recommendation is to look at a platform’s monetization strategy. All ad-driven platforms will be compromised in some way. Finding platforms that seek novel funding mechanisms, such as subscription-driven models, is a high priority. Otherwise, we merely participate in large-scale advertising systems, rather than investing in platforms that suit our needs for trusted content, connection, and commerce.

    For social media, there are non-ad-supported solutions: Mastodon and Bluesky. These are largely Threads or X replacements, and of the two, I have had more traction on Bluesky. Both give a public forum for events and discussion, but neither replicates key aspects of Instagram, including the Grid, which offers artists a gallery, and Stories, which offer ephemeral content that leads to connection. I struggle to see how to drive print sales or portrait bookings through either platform, so I mostly have fun with it.

    Both Mastodon and Bluesky offer a revolutionary service to their users: a default timeline that acts as a “no algorithm” feed – simply showing the posts of the users you follow, ordered by recency. Once you hit a critical mass of accounts though, the necessity of an algorithm becomes apparent. Bluesky allows users to select alternate feeds and create their own algorithms for ordering the information. For example, I subscribe to a feed called “The ‘Gram” that shows only posts from people I follow with media. Mastodon has a capacity to generate feeds from lists of users or hashtags, but does not allow algorithmic filtering in the ways that Bluesky does. 
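
    Conceptually, a custom feed like “The ’Gram” is just a filter plus an ordering. Here is a hedged sketch of that logic; the post dictionaries and DID strings are simplified stand-ins, not the actual AT Protocol record schema or Bluesky’s feed-generator API.

      # Conceptual sketch of a "media from people I follow" feed.
      # Post structure and DIDs are simplified stand-ins, not atproto records.
      from datetime import datetime

      following = {"did:example:alice", "did:example:carol"}

      posts = [
          {"author": "did:example:alice", "text": "Sunset over the dunes", "has_media": True,
           "created_at": datetime(2025, 1, 5, 18, 30)},
          {"author": "did:example:bob",   "text": "Hot take thread",       "has_media": False,
           "created_at": datetime(2025, 1, 5, 19, 0)},
          {"author": "did:example:carol", "text": "New print available",   "has_media": True,
           "created_at": datetime(2025, 1, 4, 9, 15)},
      ]

      def media_feed(timeline, follows):
          """Keep only posts with attached media from followed accounts, newest first."""
          keep = [p for p in timeline if p["author"] in follows and p["has_media"]]
          return sorted(keep, key=lambda p: p["created_at"], reverse=True)

      for post in media_feed(posts, following):
          print(post["author"], "-", post["text"])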

    Another advantage of Bluesky is domain verification, providing consistent branding that points back to your own website. For my academic musings, follow me @jamram.net. For my landscape photography, follow me @exploringthefrontier.com. I think this connection of identity to the web at large is excellent and restores the ecosystem of the open web through social media. Finally, I prefer the granularity of the moderation tools at Bluesky, compared to the per-instance moderation at Mastodon. It provides a user with excellent control of their personal experience.

    For search, I have not identified a non-ad-driven engine. While DuckDuckGo heralds its privacy model as a reason to use the platform, it still relies on ad placements for revenue, compromising search quality. Given my academic interests, I’ve started thinking about alternatives in the context of enterprise search. I think a lot of frustration with disinformation on the web at large could be solved by increasing search index quality, which involves controlling for SEO, social bots, and AI-generated content. It’s a steep hill that would rely on trust models being integrated with the indexing efforts. I’m always available for consultation on that topic.

    Conclusion

    In this article, I covered the birth of modern content ranking algorithms through Google’s PageRank and the subsequent “enshittification” of these services through ad-based business models. I identified engagement and spam control as two challenges for high-quality content that are compromised by the metrics of ad-based business models. Then I analyzed social media platforms with respect to three aspects of behavior on the web: content, connection, and commerce. Finally, I identified current tools that may give hope for a non-ad-based business model and identified a gap in the search engine space.

  • Embracing Open Technologies

    As a computer scientist, my software and hardware environment is the most critical part of my professional life. Furthermore, as a digital native, this landscape is the stratum upon which many of my interactions are built. Just as in our physical life, our digital life should inhabit healthy surroundings. Thus, I’ve entered a period of deep contemplation about the services I use, and have started embracing the ethos of the GNU Project: the tech we use should reflect the values we hold. To this end, I am making three gradual shifts to my computing environment: adopting Linux, migrating to GitHub, and deactivating my Facebook account.

    Ownership, Context, Responsibilities

    The first notion is one of ownership, and there are two aspects: licensing and data. Open-source licensing solves many distribution problems, allowing system-wide update managers that upgrade all my software at once, rather than bombarding me with popup windows for each application. However, not all software works this way, and so we must confront the ambiguous reality of digital rights management (DRM). Last month, I had to replace my motherboard, which triggered Windows to inform me that I may have been a victim of software piracy. This is because the license is tied to the physical installation of the software, rather than to my right to use it. App stores, such as the Steam Platform, solve this problem by tying the software to the user, rather than the installation. So long as DRM does not interfere with the portability of my intellectual property, I am comfortable with it.

    The cloud is a double-edged sword when it comes to ownership and portability. On the one hand, by distributing data across multiple servers, we gain reliability and ubiquitous access, at the expense of security. However, many cloud storage implementations (e.g., Dropbox) do not follow file transfer standards in place since the 80s, locking you into their proprietary service and software. In contrast, services like GitHub offer remote hosting but do not lock you into their system – your data is always portable. Amazon MP3 also offers portability through unencrypted MP3s with unlimited downloads. By adhering to standards, applications guarantee openness of data, so long as the standards are published and APIs are available.

    However, standards, even when published, require compliance and ubiquity, and it is here that Facebook fails. While championing the Open Graph protocol for data, Facebook follows the old Microsoft approach to standards: “Embrace, extend, and extinguish.” Messages are the clearest example of this. Every user on Facebook automatically has an e-mail address @facebook.com. This address, though, is not accessible via the standard IMAP or POP protocols, but it can receive messages from any address, locking users into the Facebook ecosystem. We are digital sharecroppers, handing over content with false promises of ownership, constantly undermined by forced changes that benefit corporate interests.

    The context of these messages has also rapidly changed. While they were once analogous to e-mail, they are now analogous to chat, a very different medium (with the Jabber/XMPP open standard giving a facade of openness). Wall posts have undergone similar context shifting – from the early days of wall-to-wall conversations, to status comments, to the timeline – all the while not offering easily accessible search. Control over context is a critical right for digital interactions, a point argued best by danah boyd. With nearly one billion users, Facebook is a self-described “social utility”, which vests in it a social responsibility to its users. Given its rejection of this responsibility, I have deactivated my Facebook account in favor of controlling my own context at my personal web page. It is my hope that future social networks will maintain a balance between the free-for-all of MySpace pages and the rigor of Facebook profiles.

    We must also have the right to be forgotten. Facebook maintains negative-space data: based on network structure alone, it is possible to infer unreported profile data and even unregistered users. Klout auto-computes its metric for all Twitter users, regardless of whether they have registered for the service, driving thousands of registrations just to opt out and forcing people to hand over their personal data regardless of their participation. This is a major problem for all social applications. The power of social applications is mighty, and maintaining user control is critical, lest we unintentionally surrender our identity to others.

    Dimensions of Services

    While I’ve sketched out some specific considerations, there are a few general principles to extract. It’s important to note that the above arguments have little to do with the notion of privacy, highlighting that the principle of openness is very different from the principle of publicity. It is possible to have an open system which is private. For example, private GitHub repositories are inherently open: the fundamental data, the code, is all accessible to the user, while private repositories may keep it from the public. Privacy and openness are also separate from commercial interests and cost. GMail is a private, open, free, commercial system, adhering to the very same IMAP protocol as all other mail servers, yet it is monetized for the company despite storing private information and being a free service. When it comes to privacy, we must first start with openness, because privacy is built on trust. If you are not trusted with access to your own data, how can you trust that system with it?

    Contemplating services within this framework still has issues: how do I deal with Steam, which is a closed, private, commercial service? The last aspect is portability. While my software is locked to the Steam service, it is not locked to a particular computer. Richard Stallman even makes a well-tempered argument that Steam can be beneficial for the Linux ecosystem by offering certain freedoms of choice, and the company itself has made a huge commitment to open-source development – rapidly improving Linux graphics drivers.

  • Reflections on Privacy

    For many people, the primary privacy concern is the "no parents" concept – we don’t care who sees things as long as our "parents" don’t see them (where parents can be anyone we don’t want to see things – professional contacts, straight-edge acquaintances, terrorists, Julian Assange, etc.). This is what I term the exclusive privacy model: start with the public and begin cutting people out. However, this "public minus parents" idea doesn’t make sense. Online, someone just has to log out to see this information. Offline, all someone has to do is talk. Facebook was originally marketed this way: here is a place to post information where only Harvard/Ivy League/college students can see it.

    This exclusive model is the most common privacy misperception. Information spreads, and by consciously recognizing this, privacy becomes synonymous with trust. For example, you send an e-mail, confide in a friend, or upload a photo. This is private information, but it is capable of being shared or forwarded in any number of ways, both online and offline (e.g., gossip). Its reach is mitigated by social convention and our own discretion.

    Google+ gets this inclusive privacy model right. First, it always explicitly states who an item is being shared with, not who is being excluded. When resharing an item that was shared with a limited circle, it notifies you of the original intent, highlighting the privilege and trust placed in you. Just like an e-mail program’s forward button, each piece of content has a share button, and the API will allow all data to be federated outside of Google+. However, you can also disable resharing for each posting. Someone else can always copy-paste your content, but it won’t be computationally linked to you.

    Privacy isn’t just about information; it’s about image as well. Google+ enables full control over your profile. Instead of posting to your wall or tagging you in a photo, people communicate with you directly through limited shares which do not appear on your public profile. Photo tags don’t appear in your albums until they are approved. A box in the upper right corner of your profile allows you to view it as any other user. Voyeurism is all but eliminated, as you do not see a constant stream of external interactions. Facebook has some of these settings, but they are not as pervasive in the profile.

    The Next Step

    Google+ seems to have figured out a better way to handle privacy – both in terms of information and image – but the next social networking revolution is targeting: I don’t care who sees what I post, but I am self-conscious about overloading people with irrelevant information. My ideal publishing model wouldn’t be about circles of people, but streams of tagged content. If there existed a service where you could follow a person but mute certain content streams (such as local events, politics, etc.), we’d have perfection. For example, friends in Kentucky don’t care about tornadoes around Bloomington. Professional contacts may be extremely interested in my philosophy and technology content, but don’t care about what concerts I’m going to. People who aren’t in the same circles (hometown friends, college friends, professional contacts, etc.) may share interests in internet humor or politics, while others consider unfollowing me because of it. None of this information is private, but I don’t want to inundate the world with extraneous chatter. If a social network can figure this out, that’s where I’ll plant my flag.
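
    Since I’m describing a publishing model, here is a hypothetical sketch of what I mean: posts carry topic tags, followers carry per-topic mutes, and delivery is simply the check that none of a post’s tags are muted. All names and fields here are invented for illustration.

      # Hypothetical sketch of tag-based streams with per-follower mutes.
      posts = [
          {"text": "Tornado warning for Monroe County", "tags": {"local", "weather"}},
          {"text": "New paper on semantic networks",    "tags": {"philosophy", "technology"}},
          {"text": "Seeing a show at the Bluebird",     "tags": {"concerts"}},
      ]

      followers = {
          "kentucky_friend":      {"muted": {"local", "weather"}},
          "professional_contact": {"muted": {"concerts"}},
      }

      def deliver(post, muted_tags):
          """A follower sees a post only if none of its tags are muted."""
          return not (post["tags"] & muted_tags)

      for name, prefs in followers.items():
          visible = [p["text"] for p in posts if deliver(p, prefs["muted"])]
          print(name, "->", visible)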

  • Computers

    Today I got my new computer and figured I may as well brag about specs 😉 Here’s a list of all my networked devices; they join Carlo Angiuli’s computers along with the other housemates’ laptops, iPods, and phones. We have a total of 14 devices on our network – egads!

    UPDATE: 1/21/10 – OS upgrades for singularity and little-guy

    • singularity – my new primary desktop. 8x as fast as sweetness and consumes half as much total power.
      CPU: AMD Phenom II X4 905e – 2.5GHz, 65W energy efficient
      RAM: 4GB DDR3 1333
      Chipset: AMD 785G
      Graphics: AMD/ATi Radeon HD 4200 (integrated)
      Storage: 3x500GB SATA II
      Optical: Samsung 22x DVD-RW
      Operating System: Gentoo Linux 2.6.32 & Ubuntu 10.04 “Lucid Lynx” & Windows 7 Professional
      Year: 2009
    • sweetness – my stalwart companion for 5 years. She’s still an excellent single core system, but we live in an era of immense parallelization, so it’s time to move on. Any new usage ideas?
      CPU: AMD Athlon 64 2800+ – 1.8GHz, 89W, overclocked to 2.4GHz for gaming
      RAM: 2GB DDR 400
      Chipset: nVidia nForce 4 SLI
      Graphics: nVidia 7900GS 256MB
      Storage: 160GB + 80GB SATA
      Optical: 18x DVD-RW, 20x DVD-ROM
      Operating System: Gentoo Linux 2.6.32 & Windows XP Professional
      Year: 2004
    • media-pc – generic name avoids over-attachment. This beautiful box drives our 37″ panel upstairs and hosts our household media.
      CPU: AMD Athlon X2 4850e – 2.4GHz, 45W energy efficient
      RAM: 4GB DDR2 800
      Chipset: AMD 780G
      Graphics: ATI Radeon HD 3450 + 3200 hybrid CrossFire
      Storage: 2TB SATA II
      Optical: 22x DVD-RW
      Operating System: Ubuntu 9.10 “Karmic Koala”
      Year: 2008
    • little-guy – the netbook, a Dell Mini 9 [review]
      CPU: Intel Atom N270 – 1.6GHz
      RAM: 1GB DDR2 800
      Chipset: Intel 945G
      Graphics: Intel GMA 950
      Storage: 4GB solid state
      Optical: none
      Operating System: Jolicloud pre-beta
      Year: 2009
  • Reddit

    This week I signed up for reddit. My Google Reader had accumulated 5 or 6 subreddits, so I was pretty much using the site already. The same thing happened with Twitter – I was following 6 or 7 people through Reader and finally decided it was time to give back.

    The site is basically a much better, more filtered version of Digg. It’s not as good-looking, but it’s far more functional. You subscribe to topics you are interested in, and the main page aggregates all these “subreddits”, so the articles that show up should at least be relevant. You can vote articles up or down and comment. You can also submit new articles, or pose a general question for fellow redditors to answer. There are 5 tabs at the top of each reddit: what’s hot, new, controversial (voted equally up and down), top (best of), and saved (your bookmarks in that reddit). These can lead to really cool hive-mind things, like a list of the best TED talks.

    My reddit subscriptions are mostly for tech stuff: reddit.com, politics, technology, programming (proggit), science, linux, cogsci, Python, javascript, Ubuntu, hardware, compsci, cyberlaws, tedtalks, java, PHP. If you join up, I’m JaimieMurdock.

    Because reddit is now responsible for lost productivity, I’ve set a 20-minute limit for every 6 hours in LeechBlock, which is in effect all day, every day. It takes some enforced self-control not to be consumed 😉

  • Netbook Overview

    Netbooks are a new computer form factor designed to provide extreme portability at a low price, with wireless access anywhere. They achieve this through a small chassis and low-power hardware. Netbooks are secondary computers, aimed at people who already have a kickass desktop or a bulkier laptop and just want something they can bring to class, lounge with on the couch, or browse with at the coffee shop.

    Six months ago I got a refurbished Dell Mini 9 netbook for $240 from Dell Outlet. The size still elicits a “wow” – at every lecture this semester neighbors have asked if they can play with it. (sometimes during the talk!) The overall netbook market has settled on a 10″ standard.

    General Notes
    Hardware
    There are dozens of netbooks and almost all of them have the exact same specs: 1.6GHz Intel Atom processor, 1GB RAM, no DVD drive, 9-10″ screen, integrated graphics, 1024×600 resolution, and a webcam of some sort. Things that will vary widely are battery life and keyboard size.

    The lack of a DVD drive will probably bother some people, as will the presence of integrated graphics. Both of these are non-essential to the netbook philosophy, which dictates that everything important is on the internet. If not, you need a more powerful computer anyway. nVidia’s Ion platform aims to address the graphics issue, but has seen slow adoption.

    Operating Systems
    Windows 7 has been touted as the best choice for a Windows netbook experience. Although it does not formally launch until October 2009, the release candidate can be easily acquired. The redesigned taskbar helps promote minimalism, and new optimizations make it run smoother than previous Windows incarnations. Vista is all but impossible to use functionally. XP is offered on almost all netbooks and is the preferred choice over Vista.

    I prefer Linux, which has finally matured enough for mainstream use. Several vendors ship netbooks with Ubuntu Linux. I find Ubuntu to be far more usable than Windows, especially for the painless updates and streamlined software download and install process. I tried the Ubuntu Netbook Remix (UNR), which is optimized for small screens, but I wasn’t a fan of its particular application launcher. My final configuration was a self-remixed version of the Desktop Edition, adding the Maximus, Window Picker, and Human Netbook Theme packages.

    I just got an invite for the Jolicloud alpha, which looks like a promising netbook OS with a much better app launcher. I’ll update with impressions later.

    Specific Models
    Dell Mini 9
    My netbook is the Dell Mini 9 – “little-guy”. It is about the same size and weight as a standard hardcover book (see above). The LED-backlit display is amazing, with great colors and contrast – it has been favorably compared to MacBook Pro displays. The build quality is very solid. There are no moving parts in the entire chassis, as the low-power Atom processor enables a fanless design and the storage is a solid state drive. In addition, every component is extremely easy to access – only 2 screws need to be removed to get to the wifi card, hard drive, and RAM (see below). On the standard 4-cell battery it gets about 4 hours of battery life at full brightness with wireless enabled.

    There are two caveats. First, the keyboard is wonky due to the 9″ form factor. Dell decided to sacrifice the standard layout to ensure that the letters were near normal size. My typing speed is about 20% slower than on a full-size keyboard when punctuation is required. Also the solid state drive in the base model is only 4GB. Since most of my data is on the cloud, this isn’t a big deal. If you need more space, there is an SD HC slot on the side and the hard drive is easily replaceable.

    The base Dell install of Ubuntu 8.04 is okay, but not excellent. The OS is bundled with the Yahoo web apps suite linked everywhere, which was frustrating since I live on the Google cloud. The application launcher was very well done. Software updates were a pain since Dell used an LPIA Linux kernel instead of the more ubiquitous i386 kernel. In usage, there is no difference, but it does mean that the package manager is severely limited. You can also order the computer with Windows XP.

    All in all, I think the form factor of the Mini 9 is well worth it, and doubt I would toss it around as much if it were slightly bigger. The keyboard can be overcome, especially when you recognize that it is meant to be a secondary computer. If I need to do serious work, I’ll get on my desktop.

    [Images: Dell Mini 9 next to an exemplar hardback; Dell Mini 9 internals]

    Other Recommendations
    LifeHacker has an excellent Hive Five article on netbooks: Five Best Netbooks. It focuses on 10″ models, which seem to be the emerging standard. The Asus 1000HE has received much praise.

    If you are interested in a 9″, the Dell Vostro A90 is the same as the Mini 9. The Mini 9 has been removed from the main page, but it appears you can order it here.

    Purchasing Notes
    If you are an IU student looking to purchase a netbook, remember the IU Dell Partnership Program. You’ll be asked to authenticate via CAS and then taken to a custom Dell page with 7-12% discounts on all items.

    Everyone should look into the Dell Outlet. The prices are severely lowered and all computers come with a standard 1-year warranty. My Mini 9 came from the outlet, and I have been completely satisfied.

  • Firefox Extension Mania!

    This month I discovered Firefox extensions! I really hate bogging down my browser, but these are incredibly useful. Know any others? Link them in the comments!

    LeechBlock (extension)
    This is the best productivity extension ever. It allows you to list a few domains to block (twitter.com, facebook.com, youtube.com, reader.google.com, …) and set up a time period to block them. BUT it also has an option to allow limited access. I have it set up to allow me on my sites for 10 minutes an hour. This keeps me on task, but allows reasonable distractions to clear the mind. It is important to check the “Actively block these sites” option, as that will redirect any already open tabs to these timesinks. I like redirecting to this undistraction page.

    GreaseMonkey (extension)
    GreaseMonkey is one plugin that I’ve actually stopped using, because it does tend to slow down browsing and can be used maliciously. However, some people may find FB Purity useful. It hides all the annoying quiz applications from showing up in your Facebook newsfeed!

    KeyConfig (extension)
    KeyConfig is a small extension that allows you to rebind and create new keyboard shortcuts. Things I have done:

    • full screen to F2 – much more convenient placement
    • Evernote Web Clipper to Ctrl+E – much quicker note-taking, see more on Evernote below
      Add new key with this code:
      evernote_addSelectionToEn3(null);

    • bit.ly sidebar to Ctrl+B – quick distribution of cool sites through Twitter
      Add new key with this code:
      content.location = "javascript:var%20e=document.createElement('script');e.setAttribute('language','javascript');e.setAttribute('src','http://bit.ly/bookmarklet/load.js');document.body.appendChild(e);void(0);"

    • any bookmarklet can be added with:
      content.location = "(bookmarklet code)"

    Since I got my netbook, my cloud computing presence has grown exponentially. Syncing between sweetness and little-guy just takes too long to set up and introduces an administrative task I don’t want to deal with. The following extensions dramatically increase the utility of the cloud.

    Delicious (extension) (official site)
    Delicious replaces my bookmarks menu with an easy to use tagging infrastructure and note taking system accessible through Ctrl+D. By putting my bookmarks on the cloud, I can access them from any computer (useful for continuing research projects in the library). The social networking aspect didn’t seem like a big deal to me, until I started actually using it. Typically our friends share our interests, so it’s not surprising that we would find their bookmarks interesting.

    Finally, the Delicious plugin allows you to sync quicksearches across computers (tag things with shortcut:). I have a quicksearch setup to search my delicious bookmarks and to bring up my bookmarks by tag, dramatically increasing the utility of my bookmarks by limiting my search domain to sites I have already flagged as useful. (my quicksearches – feel free to save the interesting ones to your Delicious 🙂 )

    Evernote (extension) (official site)
    OneNote is a program that Microsoft just got right. Unfortunately, it’s Microsoft, and I’ve switched to the Linux world. OneNote was integrated into every part of my computing life – anytime I would have put a note into a little text file, it got tossed into OneNote instead (phone numbers, quotes, observations, guitar tabs, letter drafting, etc.). Win+N (new note) became my most-used shortcut. It is sorely missed – but Evernote has done a respectable job of replacing it.

    Evernote is like OneNote in a lot of ways, but it uses a tagging system in lieu of tabbed notebooks and is more ubiquitous, with native clients on almost every platform (Win, Mac, iPhone, Blackberry, Windows Mobile, Web). Unfortunately, there is no native Linux client (the Wine version works, but it’s got some ugly buttons). How is it useful to have Evernote on your phone? Notes on the go, recording song ideas for later use, taking pictures of receipts or things you want to reference later – the uses are legion.

    Back to Firefox though – the web clipper is an awesome extension, as you can highlight any section of a site, click the elephant, and voila! it’s been added to your notebook with a link to the original source. Great for compiling research.

  • Pictures


    On Friday I got the Canon PowerShot SD780 IS (Amazon). I’m really pleased with it – the form factor is amazingly small and it feels sturdy. Despite its small size, it packs a ton of features – 12 megapixel sensor, 3x optical zoom, view finder, 2.5″ LCD, HDMI out and a full gamut of image options that I’ll be exploring soon. Thus far, it earns high accolades.


    I’ve been using Google Picasa to do simple photo editing (just crops and straightening so far). I like the suite’s usability and the hassle-free uploading to Blogger and Picasa Web Albums (and Facebook on Windows). One strange thing about Picasa is that it doesn’t actually save edits to the file directly – rather it stores the transformations in library files. This preserves the originals and saves disk space, but can be confusing when you open the file in a different program and notice your edits are gone. The Export button saves the edited pictures to your hard drive. The other export options also just send the edited picture. It’s a good system, but something to be aware of if you want to move to other image software.


    More can be found at my Picasa web album.

  • How I Do Google Reader

    Google Reader is the single best tool on the Internet. There is a ton of news and information on the Internet, but people don’t know how to manage the onslaught of constantly changing content. Instead of taking advantage of the real-time nature of the web, they continue to utilize print, television and radio to get their current events, humor, music news, research publications, etc. often wasting time waiting for stories that interest them.

    In Google Reader you subscribe to websites you are interested in, just like a magazine subscription. There are several ways to subscribe:

    1. In Google Reader, click the add a subscription button and enter the website URL or search terms and Google will find the feed for you.
    2. Just look for the RSS Icon and click on it. In Firefox this will bring you to a view of the feed. Just select Google from the list of subscription options and then click subscribe now.
    3. This icon may also appear in your browser’s address bar. Click it and you will be given subscription options.

    In addition to giving you relevant information, Google Reader has a social aspect which allows you to share articles with your friends and see their shared articles. It’s a great way to foster discussion and helps us come across content we would not otherwise see. These shared items can be imported to Facebook, further extending their reach.

    Google Reader has been a boon for my productivity – I no longer compulsively check sites for updates, they come to me. The trends feature allows me to look at what I’m really reading and determine whether my subscriptions are really worth it. Shared items have promoted hundreds of conversations. My morning routine now begins with an hour on Google Reader, like Granddad’s newspaper reading.

    How do you pick good feeds? Well, you can start with websites and blogs you normally visit. From there, add your friends’ blogs and put them in a Friends folder. As you add blogs, consider their volume and quality. The best feeds are low-volume and high-quality, where nearly every article is a must-read. Some feeds are meant for scrutinizing, while others are meant for skimming headlines.

    Here are some essential feeds: (for more check out my Master Subscription List)

    General News
    Yahoo! News Top Stories – aggregate of AP, Reuters and AFP headlines. Feed just prints leading sentence and picture. Gives a good overview of what the mass media is talking about. High volume, low clickthrough.
    Boston Globe: The Big Picture – The best photojournalism, about 3-4 slideshows a week.

    IU – local awareness
    IU General News – feed from the Indiana.edu homepage
    Indiana Daily Student – mostly for lulz

    Politics
    First Read – MSNBC’s political analysis blog, lots of volume. Good feel for what’s going on in Washington right now.
    The Economist: International – The Economist is one of my favorite print magazines, and the international section is the best part of it.
    The Economist: The world this week – Worth subscribing to regardless of interest in politics, as it provides an excellent summary of the world each week.
    GOOD transparency – great section of an online magazine with infographics (example: the first 100 days of the presidency from Roosevelt to Obama)

    Tech
    Ars Technica – moderate volume, high quality. Great articles on everything technology

    Cognitive Science
    Mind Hacks – AMAZING blog about all things to do with the mind. They have a post every other week entitled “Brain Spikes” that just links to a ton of interesting articles.
    TED Blog – Blog from TED Talks with more information on talks and generally cool stuff

    Productivity
    LifeHacker – High volume blog filled with cool programs and ideas to help boost productivity
    The Simple Dollar – Great blog on personal finances. Make sure to check out his free eBook – “Everything You Ever Really Needed to Know About Personal Finance on Just One Page”
    Zen Habits – Excellent productivity blog which spawned the Zen to Done (ZTD) system, a more practical version of Getting Things Done (GTD). See how I’ve implemented part of it: Doin Thangs.

    Final tip: Review your feeds every month and try to eliminate 10% of your feeds to reduce your volume. I often find myself unsubscribing from great feeds because I’m not actually reading them, and because there are friends who will fill that gap through shared items.

    If you know of more useful feeds or have any Reader tips, feel free to comment!
