Wednesday, September 9, 2009
My Father's Obituary
Edward Michael Bouzide, of Oconomowoc, Wisconsin died Tuesday, September 8, 2009 at Memorial Hospital of Oconomowoc.
He was born on October 22, 1922 in Chicago, Illinois, the youngest of Emil and Lilly (Wardy) Bouzide’s seven children. Edward was a chef, working in a long career at various restaurants in Oconomowoc and the Milwaukee area from 1954 to when he retired at age 80. He took pride in how much people liked his cooking.
Edward graduated from Lindblom High School on Chicago’s South Side in 1940. He attended one year of college. Edward joined the United States Army Air Corps near the end of World War II and trained as a ball turret gunner.
Edward moved to Oconomowoc in 1954. His first experience as chef was as a partner at the Strand Annex on Wisconsin Avenue in Oconomowoc, which was part owned by his sister Alice’s husband George Gazell. He also met his wife-to-be Marlene Jaki, originally from Milwaukee, shortly after he moved to Oconomowoc. He was married October 9, 1954 in Okauchee, Wisconsin. He later opened his own restaurant, Eddie’s Lunch, in Oconomowoc around 1959.
Besides his dedication to his profession, Edward also had a passion for historical reading, loved music and occasionally played piano, and worked with wood. Later in life he also became an avid self-taught knitter.
Edward is survived by his loving wife, Marlene. He was also very proud of each of his three children, by whom he is also survived: daughter Lissa (Edward Gilaty) of Chicago, and sons Paul (Catherine) of Chicago and John (Diane) of Ashippun, Wisconsin. He was preceded in death by his grandson Kenneth Christenson. He is also survived by his sister Alice and was preceded in death by two other sisters and three other brothers.
Visitation will be held on September 11, 2009 at the Schmidt and Bartelt Funeral Home in Oconomowoc from 1PM to 3PM with a service to follow. In lieu of flowers, donations can be made to the charity of your choice.
Tuesday, May 19, 2009
VRM 1 - What is Vendor Relationship Management
As a technologist, VRM is something I'm now (afterwards) feeling an even more passionate enthusiasm (imagine!) for and want to make continued contributions to. It's essentially about humanizing commerce. So in the spirit of initial contribution, I thought it would be a good idea to blog out an N-part series on VRM to capture and clarify my understanding. Thanks and a hat tip to Tim Bray for the N-part format I often enjoy in his blog.
Without further ado, here's my take...
What is VRM?
Vendor Relationship Management is a concept (I wanted to say “business model” and “technical architecture” but they seem too limiting so I’m going with the fuzzy term for now) that is centered on changing the relationship between customers and sellers. (And even these terms are too limiting, but are probably illustrative enough for this introduction.) Sellers are typically called “vendors” in order to provide a contrast with Customer Relationship Management (CRM), but as Doc points out this is a term of art from the technology business, so "vendor" isn't perfect either.
For the purposes of this discussion, the current state of the art of the customer-seller relationship can be characterized as follows:
1. Sellers broadcast promotional information about their products and services via advertising. But advertising is:
- An extremely inefficient way to link buyers with sellers. Most users ignore and resent online ads, perhaps even – or especially in some cases - “contextual” ones. The fact that paid advertising subsidizes the cost of providing products and services in the world of electronic (and now digital) media and “content” is fairly abstract to people (despite it being around since the early radio and TV days) in that it’s usually not “front of mind” when making decisions about what media products to pay attention to.
- Difficult at best to correlate with individual actions – in particular purchase events - that may have been influenced by the campaign
- To the extent that correlation can be enabled through technology (such as location-based advertising for example) it is done through appropriation of the customer’s valuable personal information (notably location, attention) without corresponding compensation.
- Helps to foster a “siloed” business model and interaction process that is painful for users since they need to provide their personal data (minimally name, address, credit card) to each vendor in order to facilitate e-commerce. That this process of “silofication” also helps to lock in consumers by making it annoying at best to switch to a competitor only reinforces and builds inertia into this model on the vendor side.
- Inaccurate, a consequence of most CRM attributes being generated analytically and in ad hoc, vendor-specific ways.
VRM aims to change this state of affairs (“level the playing field”) first and foremost in order to empower individuals and “re-humanize” the marketplace. But there are powerful incentives to business:
- Connecting sellers with buyers more efficiently
- Improving vendors’ supply chains to lower costs (matching production and distribution to demand)
- Opening new business opportunities to match sellers with buyers on the individual’s behalf. This latter role is often referred to as a “fourth party” role (where the individual is the “first party”, the vendor or supplier is the “second party”, and agents who work on the vendor’s behalf, such as advertisers or advertising search engines, are “third parties”).
VRM Architecture
The key technical or architectural idea behind VRM is the notion of the Personal Data Store (PDS). To understand the PDS idea, one first needs to understand what kind of information is maintained about you for purposes of building and maintaining a (currently one-sided) commercial relationship.
The sort of information that’s useful to a seller in a commercial relationship (especially an online one) and that is maintained in a CRM system consists of obviously essential stuff like your name and address (for creating a unique and persistent “identity”) and commerce-essential but sensitive stuff like credit card information. But other useful information includes demographic stuff like age, gender, “income level” and so on. This latter “model of you” information can be subcategorized in myriad ways, some of which are unique to what a vendor is offering and others of which are the bailiwick of marketing professionals. I’m sure you can think of a handful. And often this isn’t directly asked for or provided but rather is derived from information you do provide, such as where you live. Do you like being categorized this way?
But there is other, more dynamic personal data that is quite valuable to connecting sellers with buyers. Think of which websites you visit, in which order, and what duration is spent there. Think of your current location. Or a trail of location “breadcrumbs”. Or an accumulation of such paths from which patterns of your day-to-day life can be analytically gleaned.
Don’t think for a minute that there aren’t well funded organizations working hard to get this information in order to create “personalized” ads that (they think) will better fit your circumstances and “delight” you enough to incent a purchase event. Does this feel intrusive and creepy? Are you aware of how valuable this information is?
I used the term “identity” before to describe your name and address. All that other provided and derived “model of you” information is associated with your identity. But in the digital world, identity can be fluid. You can have multiple identities, one for work, one for play, one for personal business. Sometimes these identities can serve the purpose of anonymity (nobody online really knows that “WhiteSoxFan2121” might be Barack Obama for example). This is important because without some support for anonymous non-correlatable identity or identities, most people would be extremely reluctant to share some of the more personal and private static and dynamic information currently unavailable to vendors and only guessed at.
The “Personal” aspect of PDS turns this around. Instead of this information being spread around in the CRM systems of each of the vendors you interact with, there is only one PDS, only one “model of you”. One dataset of record that contains any and all of the personal static and dynamic information that was discussed above. A dataset that you maintain and control.
I’ll talk about control in the next paragraph. But first think about some other ramifications of a personal set of market-useful information that you create and maintain. One important benefit is that there’s only one such dataset. No need to enter your name, address and payment information on multiple sites multiple times. Another is that you maintain it. No analytically derived and presumptuous sub-classifications. You can maintain your political affiliation for example. Even your medical records.
"Whoa, hold on now" I can imagine you saying. Not inclined to put stuff like that into a computer where it can be fluidly shared with other organizations on the internet, for goodness sake? Can’t say I blame you. That’s where control comes in. The other key property of the PDS is that you control who gets to see what information. Not every piece of information is needed or useful in every commercial transaction. For example, if you’re looking for a restaurant, knowing your location and what food you like and dislike can be very useful. Note in particular that your name and address probably aren’t. If this is a transaction with a medical professional, your medical records may be essential. How this can work in a secure way – and probably in a more secure way than your medical records are handled now in some medical provider’s CRM-like system – brings us back to identity.
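The single-dataset-plus-selective-disclosure idea above can be sketched in a few lines. This is a purely illustrative sketch, not any real PDS implementation; all attribute names and the policy structure are invented for this example.

```python
# Hypothetical sketch: a Personal Data Store as one dataset of record,
# with per-transaction disclosure policies controlled by the individual.

PDS = {
    "name": "Alex Example",
    "address": "123 Main St",
    "location": "41.88,-87.63",
    "food_likes": ["thai", "bbq"],
    "food_dislikes": ["olives"],
    "medical_records": {"allergies": ["penicillin"]},
}

# The individual defines which attributes each kind of transaction may see.
DISCLOSURE_POLICY = {
    "restaurant_search": {"location", "food_likes", "food_dislikes"},
    "medical_visit": {"name", "medical_records"},
}

def disclose(pds, transaction_type):
    """Return only the attributes the policy allows for this transaction."""
    allowed = DISCLOSURE_POLICY.get(transaction_type, set())
    return {k: v for k, v in pds.items() if k in allowed}

# A restaurant search sees your location and tastes, but never your name:
restaurant_view = disclose(PDS, "restaurant_search")
```

The point of the sketch is the asymmetry: the vendor receives a transaction-shaped slice of the dataset, never the dataset itself.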
There’s been a lot of work in the last four years on the technology side in coming up with more secure and easier-to-use network-oriented identity systems. This work is now coming to fruition and will allow individuals to create identities and associate them with a dataset like a PDS, along with sets of rules that govern which pieces of information are shared with which organizations. Mechanisms include selector-based Information Cards with encryption baked in, and emerging web-based location and data format standards like Extensible Resource Identifiers (XRI).
The adoption of these new identity models is accelerating. Selector-based identity is provided in Windows Vista and is available for earlier versions, and open-source versions are available for Linux and Mac and are working their way down to the mobile device space. This coming ubiquity is making the fine-grained control of personal information associated with individually maintained identities realistic. In other words, network-oriented identity management is the key to a truly secure PDS.
My conclusions:
• I don’t think it’s a stretch to say that VRM is the killer app of Identity Management.
• The Personal Data Store is the architectural cornerstone of VRM that provides an unassailable technical constraint to the unauthorized use of any and all personal information in e-commerce and thereby enables a user-centric model for efficiently connecting sellers with buyers.
VRM Adoption and Challenges
The VRM concept is now at a stage where technology that supports the PDS architecture is no longer the limiting factor to adoption. The most significant work now lies in making the case to individuals as well as to sellers that a VRM-infused commercial ecosystem serves their interests better than the status quo.
The tremendous inefficiency of advertising as a method for associating actionable buyers with sellers, the data cleansing and other lifecycle-oriented model fidelity issues associated with traditional CRM, and the consequent MLOTT ironically make pitching the VRM idea to sellers (and “third party” agents working on their behalf) easier than making the case to individual buyers. There is an increasing amount of convincing qualitative and quantitative data (traditional as well as VRM-specific) available to make a compelling case to sellers. The challenge here is a chicken-or-egg one: there needs to be a meaningful number of individuals providing gestures of impending purchase intent that sellers can associate with before traction on the seller side can take hold.
Before specifically addressing the individual side of the relationship, it’s worth noting that there are some models for promoting the VRM idea in a bottom-up way that can provide evidence of adoption to sellers in a low cost and low risk manner. One is to start with some trial communities and begin to cobble up “Personal RFPs” out of readily available web technologies like blogs and microformats, web spiders, Twitter/RSS/XMPP messaging and existing cookie-based identity. Another is to provide user-centric search capabilities that work on behalf of individual buyers (as opposed to the current search model of Google and others who work on behalf of the seller by mining personal information and creating paid “contextual” advertising) and connecting them with sellers. This is an example of a “fourth party” service working on behalf of the individual.
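A "Personal RFP" of the sort described above can be pictured as a simple published gesture of purchase intent that a fourth-party service matches against seller offers on the buyer's behalf. The sketch below is hypothetical: the field names, the offers, and the matching rule are all invented for illustration, not drawn from any real Personal RFP format.

```python
# Hypothetical sketch of a "Personal RFP": an individual publishes a purchase
# intent under a pseudonymous identity, and a fourth-party service (working
# for the buyer, not the seller) filters seller offers against it.

personal_rfp = {
    "intent": "used bicycle",
    "max_price": 200,
    "location": "Chicago",
    "identity": "WhiteSoxFan2121",  # pseudonymous; no name or address disclosed
}

seller_offers = [
    {"seller": "BikeBarn",  "item": "used bicycle", "price": 180, "location": "Chicago"},
    {"seller": "VeloDepot", "item": "used bicycle", "price": 250, "location": "Chicago"},
    {"seller": "BikeBarn",  "item": "new bicycle",  "price": 400, "location": "Chicago"},
]

def match(rfp, offers):
    """Fourth-party matching: keep only offers that satisfy the buyer's RFP."""
    return [o for o in offers
            if o["item"] == rfp["intent"]
            and o["price"] <= rfp["max_price"]
            and o["location"] == rfp["location"]]

matches = match(personal_rfp, seller_offers)
```

Note the reversal of the usual search model: the seller never learns anything about the buyer beyond the gesture itself.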
Another way the VRM community is working to spur adoption on the vendor side is to build and demonstrate a cloud-based PDS architecture that co-exists with existing vendor CRM systems. This particular architecture is being implemented by the Mydex Community Interest Company in the UK. It works by using user-verified known-good attributes from various CRM systems and incorporating them into a logical dataset of record that is federated from the vendor's CRM systems. This logical or “virtual” PDS is surrounded by a strong identity layer that allows the user to grant or disallow access to select attributes for select commercial transactions.
This evolutionary approach can go a long way toward reassuring vendors that their interests are being addressed in a way that doesn’t force them to rearchitect their e-commerce systems and in a way that lets them evaluate the business benefits of VRM in a low-risk way.
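The federated "virtual" PDS described above can be sketched as a merge of user-verified attributes drawn from several vendor CRM systems. This is an illustrative sketch only, not the actual Mydex implementation; the source structure, the attribute names, and the "most recently verified wins" rule are all assumptions made for the example.

```python
# Illustrative sketch: a "virtual" PDS federating user-verified, known-good
# attributes from several vendor CRM systems into one logical dataset of
# record. When two sources disagree, the most recently verified value wins.

crm_sources = [
    {"vendor": "BookShopCRM",
     "attributes": {"name": "Alex Example", "email": "old@example.com"},
     "verified_on": "2008-05-01"},
    {"vendor": "GrocerCRM",
     "attributes": {"email": "alex@example.com", "address": "123 Main St"},
     "verified_on": "2009-02-10"},
]

def federate(sources):
    """Merge verified attributes; later verifications overwrite earlier ones.
    Also track provenance: which vendor verified each attribute, and when."""
    merged, provenance = {}, {}
    for src in sorted(sources, key=lambda s: s["verified_on"]):
        for attr, value in src["attributes"].items():
            merged[attr] = value
            provenance[attr] = (src["vendor"], src["verified_on"])
    return merged, provenance

profile, provenance = federate(crm_sources)
```

The identity layer described in the post would then sit in front of `profile`, granting or denying access attribute by attribute, transaction by transaction.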
But challenges to adoption remain on the individual buyer side.
One challenge is that there is no single compelling benefit to the VRM PDS-plus-identity approach. Among the orthogonal benefits are:
- A single point of creation and maintenance of one's personal information with an optimal incentive for high quality and timeliness.
- The advantages of maintaining a digital record of rich and highly private information about oneself without concern about it being shared with anyone but whom one wishes to share it with.
- A much improved experience in finding and buying products and services that better fits one's unique personal context and whose price takes into account the value of the personal information one provides.
- The ability to keep transactions anonymous and uncorrelated with other transactions if desired.
- An ability to build relationships of mutual respect and benefit with sellers.
Another big challenge is devising and implementing a user experience for creating, managing and sharing the fine-grained information in a PDS in an easy-to-use and easy-to-understand way. I still think this is the biggest challenge.
This leads naturally to the much-needed evolution of fourth party services in addition to or combined with user-centric search. A cloud-based model for trusted management and control of Personal Data Stores on behalf of a user seems likely to be part of such offerings and is a fruitful area for innovation in PDS management user experience.
An ability to combine online applications and services with shared PDS information in unique ways is still another “developer side” space for innovation in fourth party services.
So another key to adoption is the development of fourth party services allied with the individual but both driving and responding to the pace of adoption of VRM concepts on the vendor (and vendor’s third party) side.
And the key to successful adoption of fourth party services is Trust.
Friday, April 3, 2009
Geek Talk and Sun Sets
The true identities of the participants have been elided for workplace and personal liability reasons. Plus I get to expose you to the way-fun LOLCODE website (thanks Ian!).
Verbatim-ish email transcript below - read from bottom (thanks and a hat tip to Bob Frankston for this semi-effective but easy-to-blog narrative device):
--------------------------------------------------------------------------------
Firstly apologies in advance to Dan.
No humorous lolcode links here either, so don't worry. I'm nothing if not humorless. I even changed the Subject: line to help steer into seriousness a bit.
But I think this *serious response* (to what might be a frivolous troll-like point admittedly) is or at least ought to be of interest to nearly everyone on _Global_Architecture_Ext mail list and as such constitutes (part of) my job responsibility. So I'm sending this forth without guilt. You may of course choose to ignore, that's your prerogative.
---
What I recommend is *not* "killing" or "switching" languages at all. The idea of killing or switching would seem to imply that we should use one and only one language.
Imagine we have a JVM (and we do for a goodly fraction of all our running and planned software). Doesn't matter *whose* JVM as long as it runs standard bytecodes (which rules out the Android "JVM" BTW).
Then imagine a world where some of our source is written in:
1. Java the Language. This is important for all our existing Java source as well as for new development where some of our devs only know Java the Language.
2. Groovy (and boy do I hate the name but love the language in nearly equal measure) as a high-signal/low-noise language where clearly readable intent in the source is a paramount virtue.
3. Scala where a workload in the object domain both permits and demands leveraging lots of cheap commodity threads and cores to achieve an acceptable throughput and cost of scalability.
Yes I could mention JVM-hosted languages like JRuby and Jython and Clojure (and I'm not opposed to any of those) but they aren't as deeply integrated into the existing Java object model (i.e. all classes implicitly descending from Object) and having access to all of the familiar and effective Object and Class goodness is very powerful from a Java developer standpoint, while still obtaining the benefits of JVM-hosted code that allows existing development and deployment tools and platforms to be leveraged *without change*.
And to those of you who are .NET devs, the concept I'm promoting: developers with a ***multiple language toolkit*** running on a common VM and runtime should also be familiar and relevant to you.
-Pauly
-----Original Message-----
From: Dan
Sent: Fri 4/3/2009 3:22 PM
To: Ian; Boris; Paulytron; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
I am sorry, but can either someone please remove me from that global
arch ext mailing list or start to post only mails on that list that
are _really_ relevant to _all_ recipients on that list...
Thank you!
Dan
-----Original Message-----
From: Ian
Sent: Friday, April 03, 2009 2:48 PM
To: Boris; Paulytron; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
I forgot to mention the other (very obvious) alternative, Boris:
http://lolcode.com/
-----Original Message-----
From: Ian
Sent: Friday, April 03, 2009 2:48 PM
To: Boris; Paulytron; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
Of course not. We should kill it in favor of Erlang...duh. But if you must have a JVM, Scala and Clojure (yay, Lisp!) will do.
-----Original Message-----
From: Boris
Sent: Fri 4/3/2009 2:45 PM
To: Ian; Paulytron; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
While, we are on the roll,
Should we kill java as well in favor of say ... Scala?
-----Original Message-----
From: Paulytron
Sent: Friday, April 03, 2009 2:43 PM
To: Ian; Boris; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
No need to apologize Ian. Solaris now is (at best, IBM may just kill it) the next AIX, which is to say marginal++.
If Sun had lived to thrive, it might have been different, what with hardcore Linux guys like Jason Perlow extolling the advantages of Solaris innovations like DTrace, Containers and ZFS. And with efforts to bring the OpenSolaris userland in line with the Linux userland so Linux-trained devs and admins can actually be productive. Yeah I sound like a Sun shill but I don't care. I think it's good stuff is all. Sad to waste it.
But as I said that's all moot now. I'll just counter with the *only* significant enterprise I know that runs BSD or OS X is Yahoo (BSD). Otherwise those two are just as marginal server side IMHO.
So Linux it is (even IBM recognizes this). Right Boris: long live Linux.
Which makes me think of something: if IBM releases the IP for stuff like ZFS (only semi-opened by Sun) into open source (not unlikely) Linux becomes stronger. Apple purportedly has a port of ZFS running on OS X as well FWIW (not that this is relevant server side at this point in time).
-Pauly
-----Original Message-----
From: Ian
Sent: Friday, April 03, 2009 2:40 PM
To: Boris; Paulytron; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
I think you're forgetting the BSDs, notably FreeBSD and OS X (with the later playing a larger role in the Java picture, but the former on the server-side)...
...and I've always seen Solaris as a slowly sinking ship. Sorry Pauly. HP-UX, Irix or AIX, anyone?
-----Original Message-----
From: Boris
Sent: Fri 4/3/2009 2:38 PM
To: Paulytron; Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
Well,
Despite of all IBM is the largest java and java tools shop today. As for Solaris, I think it is gone. Long live Linux and windows
___________________________
From: Paulytron
Sent: Friday, April 03, 2009 2:35 PM
To: Lee; AnotherFellowPaul; AlexY; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
"If this happens, it would be a tectonic change in the computing landscape."
I agree. Java and Solaris are (still) key enterprise platforms now. The JVM (but which one!) will remain so going forward under IBM. Solaris? Unclear.
-Pauly
________________________________
From: Lee
Sent: Friday, April 03, 2009 2:27 PM
To: AnotherFellowPaul; AlexY; Paulytron; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
Usually, the buyer doesn't want the news to leak because it jacks up the price in the marketplace. (i.e. IBM would have to pay more to acquire them) From a shareholder perspective, if anyone would want it leaked, it would be Sun.
If this happens, it would be a tectonic change in the computing landscape.
Regards,
/lee
_________________________
From: AnotherFellowPaul
Sent: Friday, April 03, 2009 2:23 PM
To: AlexY; Paulytron; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
Just talked with a relative who has a non-engineering job at Sun. He's planning on not having a job in 6 months. So that's one perspective from which the IBM-Sun deal is far from dead or stuck. My sense from him is that the initial news was an early leak (perhaps intentional by IBM).
___________________________
From: AlexY
Sent: Friday, April 03, 2009 12:02 PM
To: Paulytron; GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
Does not it looks like IBM - Sun deal stuck? Or dead? No new announcements often hints on that possibility. Or am I missing something here?
Well, good for people who understand hardware. I don't value Sun for hardware, and I see the infamous Schwartz with his unbearable attachment to hardware only as a complete idiot and coward and liar. Natural liquidator. But Sun's software innovations and achievements - that's a pity to lose.
------------------------
From: Paulytron
Sent: Friday, April 03, 2009 11:49 AM
To: GregH; _Global_Architecture;
Subject: RE: Oracle and HP together again.
The Sun set at Oracle as soon as the market for Sparc CPU set, about 5 years ago.
As soon as x64/x86 became ascendant for RDBMS workloads, Linux (specifically RHEL) became their #1 target. To the point that Oracle forked RHEL to make their own Oracle Enterprise Linux (OEL). Interestingly, because the installed customer base of unforked RHEL from Red Hat exceeds that of OEL, Oracle still releases and patches first on RHEL I'm led to believe.
Returning to Sun, it's all a little unfortunate since Solaris on (any) x64/x86 box has some true performance and enterprise management advantages compared to any Linux. And Sun's own x64/x86 boxen also have some superior engineering features compared to Dell or HP or anyone else you can name. But Sun shot themselves in the foot when they publicly announced they were killing Solaris-on-Intel I guess it was 4 years or so ago. When they changed their mind a few months later when Schwartz came in it was too late: the perception of Sun as Sparc only was cemented in the marketplace (and even here at REDACTED: ask people's - even technical personnel - perceptions of Sun and you will invariably hear about slow and expensive Sparc CPUs boxes).
But that's all pretty much mooted by the reality of Oracle-on-Linux. And probably permanently mooted by IBM purchasing Sun, depending on whether IBM kills Solaris or AIX. And since IBM pushes DB2, it probably doesn't matter in any case.
Getting back to this HP announcement, the boxen are now commodities. The box in question is running OEL and has eight Intel x64 cores. HP as far as I can tell adds little differentiable value. It could be anyone's 8 x64 core box with redundant power supplies and dual InfiniBand, no?
To conclude: yes everyone knows I'm a Sun fanboy. Can't help it, I have a bias to good engineering despite their often clueless and questionable business practices, whose day of reckoning is now. And I've always balanced this by saying I own no Sun stock. I used to add "fortunately" to that statement, but right now I wish I had bought some at $4 and change a few months ago (or less right after the dotcom bust): IBM is paying more than twice that.
-Pauly
________________________________
From: GregH
Sent: Friday, April 03, 2009 10:53 AM
To: _Global_Architecture
Subject: Oracle and HP together again.
Here's a new Oracle/HP offering, and the rumors are looking like they maybe true. Has the Sun set at Oracle?
This product looks very interesting for the Data Warehouse option. I can see potential for this at REDACTED. The cost would be the next obvious question.
http://www.oracle.com/database/exadata.html
Regards,
GregH
Saturday, February 28, 2009
Freedom to Network
I really like some of his turns of phrase:
"They have created a Byzantine system of complex billing systems that have sucked hundreds of billions of dollars out of the economy in billable events"
and
"They have given us a funding model which is purposefully designed to create scarcity and have made us pay for very expensive redundant facilities just to have a physical embodiments of the accounting abstractions."
Of course "purposefully designed scarcity" comes close to being a best practice in the "free market" economy, not just telecom. Still it's good to think about all of this in structural terms instead of "good" versus "evil".
Tuesday, February 3, 2009
Super Bowl as metaphor for population trends?
In fact the author argues for continued migration based on a (interestingly visualized) correlation between average January temperatures and population growth for US cities (since the last census I take it). So the title's a little misleading (and a false promise of hope for this Chicagoan).
However (and interestingly for a liberalish paper based in a hyperconurbanized Northeastern city and opinion capital), it's only in the comments that limits to Sun Belt growth are correctly discussed (water availability in the Cadillac Desert mainly, but also steeper growth in energy prices that make the essential air conditioning and resulting auto-centric sprawl - let's just say not quite sustainable).
But there's balance in the comments too: one mentions the fact that heating a building from say 20F to 68F consumes more BTUs than cooling one from 98F to 72F (this was pointed out rather effectively by Wired magazine in 2006). Another commentor points out that established Pittsburgh is culturally less open to newcomers than new Phoenix (Dunno. Maybe. It's plausible I suppose).
And in the "what gives" department, another commentor claims that the Pittsburgh shrinkage figures only consider the city of Pittsburgh itself, not the metro area, and implies that this was not consistently applied. I would agree that comparing MSA growth may be more meaningful, especially when one of the types being compared is a newer sprawl which tends toward annexation models for growth.
Still the author gives a very nice industrial-geographic historical analysis of the rise and decline of the industrial Great Lakes and Ohio Valley cities that has a transportation cost basis. As a transportation and logistics geek, that stuff rings true and fine.
Holographic entanglement reprise
Thursday, January 22, 2009
Holographic mysticism
For myself, I'm - as always it seems - somewhere between Frankston's hyper-rationalist perspective and what he calls the "false awe" of mystical interpretation of recent scientific discoveries (starting I suppose with quantum mechanics).
And of course it's fun to read serious discourse - regardless of one's own position on the rationalist-mysticism spectrum - that reminds one of "I Think We're All Bozos On this Bus"...
Sunday, January 11, 2009
IT's carbon footprint
Then again, Google claims that the 7g of CO2 per search figure "is *many* times too high". It would be good if the assumptions and methodology behind these claims were also published. But then again, I'm not sure that "grams of CO2 emitted per search" on the server side is necessarily the right way to evaluate this.
The (not insignificant) carbon footprint of large data centers was covered (somewhat misleadingly I think, and again using Google as the example) in March last year by Ginger Strand in Harper's Magazine, which prompted me to respond with a letter to the editor (never published in the magazine due to, I'm going to guess, my tendency toward outlandish word counts that I just can't seem to suppress).
Here's that text (I think it's once again relevant given today's meme):
It's important to realize that the move to an information- and information-services-based economy isn't as environmentally benign relative to "heavy industry" as is commonly conceived, and Ginger Strand performs an important service in pointing this out in "Keyword: Evil".
But I do feel compelled to point out that some of the numbers used to quantify the impact of data centers like Google's could be a little misleading and ought to be put in perspective. In particular the notion of how much economically useful work is performed for every watt consumed is an important one when impact is assessed and weighed against societal benefit. After all, no human activity regardless of economic or political organization is without some impact on the environment.
I would suggest that a more meaningful metric isn't that - as the author puts it - "thousands of servers" spring into action on each search query, or that "tens of billions of CPU cycles" are allocated to that task. Since "frivolous" or "wasteful" energy consumption is the putative consequence here (and presumably in contrast to other economic sectors or even individual lifestyles taken in aggregate such as watching television or commuting long distances to work), a more useful metric would be to examine total data center CPU-seconds per useful task divided by idle CPU-seconds where a CPU is doing nothing and waiting for something to do. Okay, maybe you can argue that the "American Idle" - sic - query example isn't a "useful task", but never mind that much more subjective angle for now...
Given this so-called "utilization" metric, best practices for data center software and infrastructure engineering practice are intended to optimize things so that the CPUs and other hardware consuming power are utilized as much as possible. This means that these CPUs don't sit idle very long, and can be quickly and efficiently reallocated from sub-second to sub-second as they complete their (subdivided) work for any given search query.
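The utilization metric sketched above amounts to simple arithmetic: busy CPU-seconds divided by total CPU-seconds. Here's a back-of-envelope illustration; all the numbers are invented to show the shape of the comparison, not measured from any real data center or PC.

```python
# Back-of-envelope sketch of the "utilization" metric described above:
# CPU-seconds spent on useful tasks versus CPU-seconds spent idle, waiting.

def utilization(busy_cpu_seconds, idle_cpu_seconds):
    """Fraction of total CPU time doing useful work."""
    total = busy_cpu_seconds + idle_cpu_seconds
    return busy_cpu_seconds / total if total else 0.0

# A well-engineered data center keeps its CPUs busy almost continuously,
# reallocating them sub-second to sub-second across queries:
datacenter = utilization(busy_cpu_seconds=9_000, idle_cpu_seconds=1_000)   # 0.9

# ...while a home PC mostly sits idle between clicks (invented figures for
# one day: ~5 minutes of real work out of 24 hours):
home_pc = utilization(busy_cpu_seconds=300, idle_cpu_seconds=85_700)
```

On these (made-up) numbers, the data center does useful work with 90% of the watts its CPUs draw, while the home PC manages well under 1% — which is the aggregate-waste comparison the letter goes on to make.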
It is true however that these data centers are sized to peak loads, which are no doubt significantly higher than the average ones and so this reduces utilization. But even at non-peak periods these idle CPUs can do something else (like serving Gmail or Google Docs or YouTube, which don't in general share the same peaks) or can be put into a lower-power ready-to-use standby state.
Such strategies for energy consumption management and reduction - which I'd expect is taken into account in the quoted 103 megawatt figure - should also be contrasted with what happens on the receiving end of these queries. Similar CPUs in end-user PCs (and home routers and other microprocessor-intensive accessories) essentially sit idle most of the time waiting for the consumer to click the next web link (or even sit down at the keyboard). Multiply this times the hundreds of millions of internet-connected PCs in North America alone (even when in low-power standby mode which can run around 5 watts) and the data center power consumption pales in comparison.
To conclude, it's absolutely correct to point out that data centers such as Google's have a significant energy consumption impact, on par with aluminum smelting, which is traditionally considered to be a "heavy-industrial" benchmark for point electricity consumption. And it's also worthwhile to point out how an unregulated global economy with sharp differentials between states and countries in terms of environmental protection or rob-Peter-to-pay-Paul subsidization provides incentives to chase the lowest cost and often environmentally harmful arrangements. But to mention this without contrast to other aspects of environmentally harmful energy waste both within the information services sector as well as outside of it is somewhat misleading and might imply that merely eliminating the concentration in providers of information services would significantly reduce total energy consumption and waste, which I don't think would be the case at all.