17 March 2011

As promised...

Coffee! It's totally not off-topic, read the subtitle :)

As most software and creative professionals know, coffee is an important technology for boosting mental acuity and maintaining peak on-the-job performance (read: minimum necessary sanity). But did you also know that coffee can be a damn tasty beverage? It’s true. All you need is the appropriate amount of disrespect for the mainstream coffee industry and a desire to enjoy a better beverage. So read on, and learn the secrets to great coffee.

First things first. Mainstream coffee sucks, and specialty coffee mostly sucks. Mainstream coffee is primarily stale, low-quality, high-yield beans, often cheap robustas, foisted on a largely unknowing public in supermarkets nationwide. Specialty coffee isn’t so much coffee as it is flavorings, sweeteners, and milk; what coffee is sold is often neither “special” nor properly prepared – it’s usually over-roasted to serve as a background for sweet flavorings. A few specialty coffee purveyors, however, do sell good coffee, and I’ll show you how to find them, but most are happy to sell you stale beans whose dead taste is hidden behind raspberry and caramel syrups. Buyer beware.

Nevertheless, good coffee is good – great even – all by itself. It’s also dirt cheap and easy to make. Therefore, don’t settle for a cup of crappy coffee: make a cup of the good stuff for yourself.

The coffee quick course

If you follow these three guidelines and do nothing more, you will enjoy coffee better than you can find in most specialty coffee shops:
  1. Buy only whole-bean coffee roasted within the last few days.
  2. Grind it fresh, just before brewing.
  3. Brew it in a French press or a pour-over filter using fresh water, off the boil.
The first two guidelines strike at the nemesis of good coffee – staleness. Stale coffee is dead coffee. There is no way to get a good cup from it.

Sadly, most of the coffee you buy in stores is stale before you get it home. While green (un-roasted) coffee beans can stay fresh for 2 years, roasted coffee goes stale in under 2 weeks, and ground coffee goes stale in a few short hours because of the immense surface area that grinding exposes to the air. Special “freshness preserving” packaging doesn’t help much either; it’s mainly a marketing gimmick.

The only reliable way to get fresh coffee is to know when it was roasted. Therefore, when you buy coffee, buy it from a purveyor who can tell you when it was roasted. If a coffee purveyor can’t or won’t tell you when their coffees were roasted, find another purveyor. And when you buy your coffee, buy it whole bean. Store it away from heat and light (but not in the refrigerator). Use it before it goes stale. If it goes stale, throw it away and get fresh beans.

Also, get a grinder. An inexpensive ($15) blade grinder (“whirly-bird”) is sufficient for making drip coffee and lets you grind just before brewing, which is the key to avoiding staleness. At this price, there is no reason to suffer stale, pre-ground coffee. If you want to buy a better grinder, that’s fine, but don’t think you have to spend a lot of money to enjoy fresh coffee.

The third guideline addresses another common flavor-denial attack: Low-temperature brewing. Most drip coffee makers brew at a temperature too low for proper flavor extraction. The most frequent explanation that I’ve heard for this sad yet pervasive flaw is that “really hot” coffee is a lawsuit waiting to happen, and thus manufacturers have lowered brewing temperatures accordingly. Whatever the reason, the effect is a cup of lifeless coffee.

So what is the right temperature? Off the boil works well. Put a kettle of freshly drawn, cold water on the stove. When it boils, take it off the heat, wait a minute or so, and slowly pour it over your freshly ground coffee. If you’re an experimenter, a $10 instant-read thermometer is all you need to “dial in” the optimal temperatures for your coffees and your taste-buds.

Since you’ll be using a “pour over” technique, you’ll need a pour-over brewing apparatus – either a French press or a $5 pour-over filter holder, found in most supermarkets. Use the French press if you enjoy the stronger flavors of unfiltered coffee. Use the filter holder if you prefer the convenience of a filter, which makes clean-up easy. Both are small enough to take to work, and the filter holders are cheap enough to leave there.

And that’s how you make great coffee. If you think that’s too much effort, at least you can use your new knowledge to find coffee shops that use fresh beans, grind them just before brewing, and brew them properly (most commercial brewers do use proper temperatures, thank goodness).

Oh, there’s more...

If you follow the advice above, you will drink great coffee for the rest of your life. For some people, that’s enough. For other folks (like me), that’s just the beginning. It’s the first step toward a fun, inexpensive, and gastronomically rewarding hobby. Even if you don’t want to make coffee into your hobby, you do have the opportunity – right now – to give up bad coffee and start drinking the good stuff. Why not seize the day?

Home roasting

Roasting your own coffee is simple and provides three major benefits. First, you can buy your coffee green and store it for over a year. Second, you can roast your coffee as you need it, so you’ll always have fresh beans. Third, you can experiment with a wide variety of beans, blends, and roasts to enjoy coffee that you could never find in a store.

A further benefit is that green coffee is less expensive than roasted coffee. By home roasting you’ll not only have better coffee and more control but also more money in your pocket.

To roast your own coffee you will need two things: green beans and a roaster. The beans can be purchased online at places like Sweet Maria’s (where I get most of my beans) and locally from the better coffee shops in your neighborhood. A roaster can be had for as little as $5 – buy an old hot-air popcorn popper at a garage sale. That’s what many folks on alt.coffee use for their roasting. If you prefer a less adventurous solution, there are many home-use roasting machines now on the market in the $100–$300 price range. I use a $150 Hearthware Precision roaster, and it works well. Just drop in a scoop of beans, dial in the desired roast, and press a button.

Yes, it’s that easy. And, yes, the results are better than most pre-roasted coffees you can buy. Nothing smells as good as freshly roasted beans. Nothing tastes as good when brewed. Once you try home roasting for yourself, you will understand.

Espresso

If you want to experience the concentrated essence of coffee, you must drink espresso. Good espresso. Unfortunately, practically none of the specialty coffee shops and chains in the United States knows how to prepare espresso properly. If you want good espresso, you’ll have to make it yourself (or take a trip to Italy).

Unlike the advice I provided earlier, which is simple and just plain works, making good espresso is difficult. Finding the right combination of beans, grind, packing, pressure, temperature, and exposure takes practice. It took me months of gradual refinement to learn how to make a truly good cup. After years, I’m still seeking the perfect cup.

Since the perfect cup of espresso is a never-ending quest, I can only point you in the right direction. The rest is up to you. Here is what I can tell you:
  • Plan on spending > $250 USD on a good pump machine. “Steam toys” aren’t capable of good espresso. Do your homework: Read what people who own the machines say in the consumer reviews of brewing equipment on CoffeeGeek.com.
  • Plan on spending that amount again on a good grinder. Many people buy an expensive espresso machine but skimp on the grinder. Big mistake. Since grind is probably the single most important variable under your control, a grinder must be highly adjustable and produce a consistent grind, and that means high-quality burrs set in a rigid enclosure. These features don’t come cheap. When shopping for a grinder, again, check out the consumer reviews on CoffeeGeek before buying.
  • Read the alt.coffee wisdom on espresso, ristretto, crema, and tampers. It’s also a good idea in general to hang out on alt.coffee. I’ve learned most of what I know about coffee and espresso there.
Brew well and drink well, amigos.

Although coffee is commonly considered a utility beverage, it is an amazing drink when well prepared. Given its ubiquity in software and creative circles, it’s likely that you will be drinking a lot of it. So why not prepare it as it was meant to be? Why not enjoy a cup of truly good coffee? If you buy fresh, high-quality beans, grind them on the spot, and brew with hot water, you can’t go wrong. And if you decide to try home roasting or espresso, you will enter a whole new world of flavor and nuance. The rewards are worth the effort.

Whatever else you may do, please don’t let the mainstream coffee industries convince you that bad coffee is all there is. Good coffee is out there. Insist on the good stuff.

14 March 2011


Yet another completely off-topic post. Those might just have to become the norm :)

Anyways, this is pretty much what I think about every day:

08 March 2011

Hey Erick

Just while I'm at it:

That'd be Apple II on Mac OS 6 on Mac OS 7 on Windows NT on Mac OS X... All emulated. Not virtualized.

Total abuse of a PowerPC chip, but honestly they were gluttons for punishment...

[Edit: This picture is from years ago, if that wasn't clear -- it's Virtual PC on an old G4 running OSX 10.3, not Parallels or similar on a more modern machine. With hypervisors today you could nest a whole lot more than this, as long as you stayed virtualized rather than emulated (i.e. all host OS's running on the same architecture). Emulating across architectures tends to reduce speed by a literal order of magnitude -- SheepShaver, for example, runs at around 1/8 speed.]

07 March 2011

Pimp My Server

Or: Why Virtualization is Awesome but The Cloud is Not Necessarily Awesome

04 March 2011

Dell Studio 540 + HyperV = FAIL

I wanted to buy a computer that would be a virtualization behemoth. After looking on Best Buy’s site, I found the Dell Studio 540 very decent: 8 GB of RAM, a quad-core CPU, and a 1 TB hard drive – hardware specs made in heaven for a home-VM powerhouse.

I took the machine to my place and quickly wiped out the OS that was pre-installed. I installed Windows Server 2008 Enterprise, accessed Server Manager, tried to add the Hyper-V role and (drumroll)…no cigar. I thought, “no biggie, I just have to enable virtualization support in the BIOS.” Took a restart trip to the BIOS, and the virtualization option was nowhere to be found. My friend Xavier recently purchased some Dell laptops, and he could not find this option either.

I tried the next thing any rational human being would try – called Dell Tech Support. After battling with a plethora of options to reach a human being, I reached a person who could not understand what I was asking. Basically she told me, “you bought it from Best Buy, you need to contact Geek Squad…kthxbye.”

So I did. Geek Squad (ha!) told me, “you need to contact Dell.”

So I took the machine back to Best Buy and requested a refund, which they did without any issues.

This leaves us with some questions that Dell and others have yet to answer:

Why do computer manufacturers completely disable the option to enable virtualization? What harm could possibly come from this?

Does anyone else find it ironic that Macs can run Windows Server 2008 Hyper-V without a hitch and Dell computers can’t?

Does Dell really expect people to buy their machines when their support is nothing short of mediocre?

Until then, I will be purchasing a couple of Mac minis to help me with my virtualization needs.

BTW, I found a utility for quickly checking whether a machine can have VT enabled: the Intel® Processor Identification Utility. It is an MSI file, so you have to convince the people at Best Buy/Fry’s/Circuit City to let you install it. Like they told me when I was testing: “you break it – you buy it.”
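If the store won't let you install anything, a Linux live CD gives you a zero-install alternative: the CPU flag list reveals whether the processor supports hardware virtualization at all (though, as these Dells show, the BIOS can still keep it disabled). A minimal sketch:

```shell
# check_vt: scan cpuinfo-style text on stdin for hardware
# virtualization flags (vmx = Intel VT-x, svm = AMD-V).
# Note: the flag only proves the CPU supports VT; the BIOS
# can still have it disabled or locked, as on these Dells.
check_vt() {
  flags=$(cat)
  if echo "$flags" | grep -qw vmx; then
    echo "Intel VT-x present"
  elif echo "$flags" | grep -qw svm; then
    echo "AMD-V present"
  else
    echo "no hardware virtualization flags"
  fi
}

# On a live CD you would run: check_vt < /proc/cpuinfo
```
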

02 March 2011


Swore I wasn't going to do a Big Round Number post, but here we are. 100 followers, w00t!

01 March 2011

msi.dll ordinal problems

Here’s a good reminder to everyone to try and keep their software updated. Last week I tried installing Microsoft Dynamics CRM 4.0 on a virtual machine, just so that our team could have a play with it, and I kept running into this error:
The ordinal 242 could not be located in the dynamic link library msi.dll
It came up on the trial version, so I went ahead and downloaded the full version to try and install that. Same problem, so the issue wasn’t down to my installation file. I scoured the Web, but no one seemed to have had that issue – an issue not with CRM itself, but with the installer that was trying to deploy it for me.
Anyway, to cut a long story short, I noticed that the Windows 2003 installation I had on the VM didn’t have any service packs installed on it. I have no idea where I got this ancient version from, and the fact that it didn’t have any connection to the Internet meant I wasn’t too fussed about security patches. But one of the service packs must have rolled out changes to msi.dll, because installing Windows 2003 Service Pack 2 cured the issue. If only the error message had been more helpful, I would have found the answer sooner, but at least I managed to solve it before taking one of my colleague’s Adams golf drivers to my computer screen.
On the bright side, once I got the installer running, it identified which components were missing from my installation (.NET Framework, XML Core Services, Application Error Reporting, yadda, yadda) and is happily installing all the prerequisites for me.

23 February 2011

Damn you, UEFI!

We have an IBM x3650 M2 that runs a specific business application on Windows Server 2008 R2 installed in EFI mode. Now requirements have changed, and we need to virtualize it.
Unfortunately, SCVMM 2008 R2’s P2V crashes when run on this machine. disk2vhd can produce a proper VHD from an EFI/UEFI install of Windows Server 2008 R2, but there’s no way of getting it to boot in Hyper-V, whose virtual machines emulate a plain BIOS (I tried a myriad of approaches, including several Linux tools that convert GPT disks to MBR-style disks, and got the Windows Boot Manager installed, but it still wouldn’t boot).
So, what now? I’m out of reasonable ideas. I have opened a Microsoft support case regarding the SCVMM 2008 R2 P2V crash on an EFI machine, but I’m not sure I’ll get a quick answer out of this. If anyone has any ideas on how to get this fixed, I’d be thankful for any replies.
If I ever get a solution that does not include reinstalling everything from scratch, I’ll of course post it.

21 February 2011

Putting on a show

Utterly non-tech related, but a friend of mine is putting on a show this week in Manhattan; figured I'd plug it for any of you who happen to be in the NYC area.

19 February 2011

IBM i Access 7.1 installation hangs indefinitely with a Windows Installer Coordinator window

If you’re trying to install IBM i Access 7.1 on a Windows Server 2008 R2 based Remote Desktop Session Host (RDS), formerly known as Terminal Server, you’ll most likely encounter this issue.
A window titled “Windows Installer Coordinator” will pop up behind the IBM i Access 7.1 installer (hidden until you click on it in the taskbar). This “Windows Installer Coordinator” will run indefinitely, without ever successfully installing the application.
Thanks to a helpful guy from IBM Software Support Austria, I now have a solution to this issue. It’s caused by a new feature in WS08R2 RDS called Windows Installer RDS Compatibility. If this feature is enabled, IBM i Access 7.1 will not install successfully and will hang at the “Windows Installer Coordinator” window.
To successfully install IBM i Access 7.1 on a Windows Server 2008 R2 Remote Desktop Session host, set the following DWORD registry key to 0:
HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows NT\Terminal Services\TSAppSrv\TSMSI\Enable
It’s possible that not all of these keys exist – in my case, the TSAppSrv and TSMSI keys didn’t exist yet, so I had to create them manually. After creating the key, you can rerun the installation – a reboot is not necessary.
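If you have to roll this out to more than one session host, the fix scripts nicely with reg.exe, which creates any missing intermediate keys on its own (a sketch; the path mirrors the key above):

```shell
rem reg.exe creates the missing TSAppSrv and TSMSI keys automatically:
reg add "HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services\TSAppSrv\TSMSI" /v Enable /t REG_DWORD /d 0 /f
```
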

17 February 2011



Just had this pointed out to me for the umpteenth time, thought I'd share:

The Chinese word for America is 美國, pronounced meiguo, a transliteration which means literally "beautiful country". This would seem to be overtly flattering; however, the first character 美 (mei), though it does mean beautiful, is a vertical combination of the characters 羊 (yang) and 大 (da), meaning, respectively, "sheep" and "large".

Fun, eh? :)

TMG 2010 seems to be still in Beta

Our apprentice is doing a project for his final exams (IPA). For that, we’ve chosen to replace our current Exchange 2007 Edge with a Forefront TMG 2010 / Exchange 2010 Edge combination.
As the project progressed, we found a few extremely irritating and hard-to-debug issues, which needed my involvement to figure out the root cause and get them resolved without compromising the exam results.
Be aware that most of the debugging and research here was done by our apprentice, not by myself.
There are several key issues with TMG that we’ve noticed so far:
IP Blocklist Entries
If IP Blocklist Entries are present in Exchange 2010 Edge, enabling E-Mail Policy Integration will cause TMG to reject all further changes, with the following error message:
Windows Could not Start the "Microsoft Forefront TMG Managed Control" service on Local Computer
Error 0x80070057 : Parameter is incorrect
I’ve found this solution in the TechNet forums: you need to remove all IP Blocklist and IP Allow List entries.
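Assuming the stock Exchange 2010 anti-spam cmdlets, clearing the lists can be done in one go from the Exchange Management Shell on the Edge server (a sketch – export the entries first so they aren't lost for good):

```shell
# Exchange Management Shell on the Edge server (sketch, assuming the
# stock Exchange 2010 anti-spam cmdlets; keep a copy of the entries
# before deleting them):
Get-IPBlockListEntry | Export-Csv C:\ipblocklist-backup.csv
Get-IPBlockListEntry | Remove-IPBlockListEntry
Get-IPAllowListEntry | Remove-IPAllowListEntry
```
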
Extremely slow boot
Forefront TMG 2010 with Exchange 2010 and FPE 2010 installed will boot extremely slowly, requiring up to 30 minutes. This issue is caused by the coexistence with Exchange 2010.
Again, I’ve found a solution in the TechNet forums.
You need to set the services Microsoft Exchange Transport and Microsoft Forefront TMG Managed Control to Automatic (Delayed Start). This will reduce the boot time to about 3 minutes.
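The same change can be scripted with sc.exe. MSExchangeTransport is the standard short name for the Exchange Transport service; the short name used for the TMG Managed Control service below is an assumption – verify both with sc query before running:

```shell
rem Elevated cmd.exe. MSExchangeTransport is the stock short name;
rem "ISAManagedCtrl" for the TMG Managed Control service is an
rem assumption - confirm the real names via: sc query state= all
sc config MSExchangeTransport start= delayed-auto
sc config ISAManagedCtrl start= delayed-auto
```
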
lsass.exe crashes when creating Edge subscriptions
The next issue we noticed was that while the initial Edge subscription worked, the second one didn’t. It crashed lsass.exe, which subsequently caused a bluescreen. Not a very nice experience.
Again, we found a solution on the TechNet forums, and this is getting worse by the minute. The lsass.exe crash can be mitigated by removing all but one SSL certificate – not exactly a good approach, since a TMG likely has multiple SSL certificates for publishing a variety of services. But it worked. Except that mailflow didn’t.
Outgoing Mailflow doesn’t work with TMG 2010
Of course, stuff wasn’t working yet. While incoming mailflow now worked flawlessly, outgoing mailflow didn’t – mails were stuck in the queue with “Primary Target IP Address responded with 421 Unable to establish connection”.
We tried to look into this, and everything seemed alright, but we couldn’t modify any connectors on the Edge server – TMG prevented it, and thus we had no Verbose logging from the Receive Connectors. Changing the configuration in the Exchange Edge console resulted in the following error message:
Forefront TMG detected changes in Microsoft Exchange Server or Microsoft Forefront Protection configuration, and reapplied the e-mail policy configuration on server
So I’m not supposed to do that. The TMG console didn’t give me the option of enabling Verbose logging. We were stumped.
Luckily, further research showed that one can disable the integration between the Exchange Edge role and Forefront TMG – this was mentioned in a TechNet forums post.
After disabling this integration, I was able to enable Verbose logging. Which didn’t help at all, since the Exchange 2010 Hub Transport just wouldn’t show up in the logs, suggesting a deeper issue.
At that point, we checked the receive connectors that were created by Forefront – and the internal Receive Connector didn’t allow Exchange Server Authentication. After enabling that, we were finally able to send mail successfully using the Exchange Edge services.
Final words
Forefront TMG 2010 still seems to be in beta. The integration with Exchange 2010 doesn’t work as nicely as it should. I hope these things get fixed soon with hotfixes for TMG 2010. Until then, we’ve found workarounds for all of these issues.
I’m publishing this article as quickly as I can, because I’m most likely not the only one with these issues.

14 February 2011

Thinkpad T510

Since December 2008, I’ve used my ThinkPad W500 as my work laptop. We bought it as part of a promotion package.
The W500 I had came with a 15.4″ 1920×1200 panel, which wasn’t too great. While the high resolution was certainly nice, the screen was very, very dark. It could only be used indoors, and required you to darken the room on sunny days.
Today I had the chance to upgrade from the W500 to a T510, which I did. So far, I’m very much impressed with the changes Lenovo has made to this device. The W500 was running Windows 7 Enterprise x64. Some highlights of the T510:
  • New controls for volume and microphone mute – much easier to use than before
  • A bigger, multitouch-capable touchpad – as I prefer the touchpad over the TrackPoint, this helps me tremendously
  • Integrated camera and eSATA connectivity
  • Improved connectivity layout
There’s only one thing I don’t like very much right now – the redesigned keyboard. As part of my job I deal with IBM’s IBM i platform, which still makes heavy use of the Function keys – and they have all been shifted one key to the right. So I regularly press F3 instead of F4, but chances are I will get used to it.
There’s one thing that worked very well – moving my Windows installation from the W500 to the T510. I disabled BitLocker protection, removed the OCZ Vertex SSD from the W500, placed it into the T510, and booted it up; Windows installed several new drivers. Then I installed the Intel LAN drivers from a USB stick, rebooted once more, and installed the rest of the necessary drivers from Lenovo’s driver matrix. The whole process was done in less than half an hour, and re-enabling BitLocker protection was a breeze.
Windows 7 automatically reactivated by contacting our KMS servers, and I had to reactivate my Office 2010 Beta manually, which also worked flawlessly.
While this portability is great (and also existed with Vista), it’s something I was able to do with Linux back in 2004 (assuming, of course, that the kernel had the storage drivers you required).
I’ve been using ThinkPads exclusively since 2004 – my first ThinkPad was an R51, which was also my first new laptop (my first laptop ever was a Compaq Armada I bought used for 50.- CHF). When Lenovo took over the brand, I wasn’t too sure what to think of it, but having gone through several iterations of ThinkPad devices now (R51, T60, W500 and now the T510), I can see that Lenovo is committed to providing well-built, high-performance devices.
Both the T60 and the W500 are still in service; neither of them is broken. The T60 is used by my apprentice and is around 3 or 4 years old. We’ve replaced the mouse and keyboard to mitigate the wear and tear of several 40-hour work weeks on the device, but aside from that it still works great.

12 February 2011

My OCZ Vertex 120G is dying...

I currently have two SSDs – an OCZ Vertex 120GB bought before Intel priced its SSDs competitively (April 2009) and an Intel X25-M G2 160GB I bought at launch (September 2009). The OCZ Vertex is the one I use in my work laptop, and the Intel X25-M G2 is the one I use in my system at home. Both see extensive use, and both have always been used with Windows 7, which is TRIM-enabled.
The most important difference between the laptop and the desktop is that I’m using BitLocker on the laptop, which might have an influence on things. But I’ve always been using BitLocker on the SSD, so it would seem strange that this is suddenly an issue now.
I’ve always been aggressive about SSD firmware updates (after a good backup). I upgraded both the Vertex and the Intel drives to be TRIM-capable as soon as the respective firmware was out.
Unfortunately, a few days after using the OCZ Vertex in my new laptop, it started to have serious hiccups, during which no I/O would take place (the perfmon disk queue shooting up to 50). During these hangs, the HDD light on the laptop is not lit.
To make sure this issue was related to the SSD, I ran the HD Tune benchmark:

This looked bad. Further investigation showed that there was a new firmware out – 1.5 – which supposedly added garbage collection and TRIM support. After upgrading to 1.5, the hiccups became much worse – the laptop needed about an hour just to boot up.
After looking at and posting on the OCZ support forum, I was told that I’d need to wait for garbage collection to kick in. I let my laptop sit for a night, during which it crashed, and the subsequent reboot was stuck on a “No harddisk found” message from the BIOS. Things looked bleak.
Further replies on the OCZ support forum suggested that I do a sanitary erase, which would reset the disk to pristine performance levels (and delete all the data on it).
Unfortunately, the machine was too slow to run a Windows Complete PC Backup (it wasn’t finished after 4 hours). Fortunately, all the important data on my laptop is backed up using the Client Protection of DPM 2010, meaning all I had to do was reinstall my apps and I’d be good to go.
After reinstalling Windows 7, I installed the most important apps and then re-enabled BitLocker protection – during which the hiccups started happening again. The laptop would sometimes hang for 20-30 seconds, and then continue on its merry way.
At this point, I went to sleep and let the laptop idle at the boot selection screen, so that the garbage collection could do its magic.
And now here we are, 8 hours later. While the read performance using HD Tune is nowhere near as bad as it was before the sanitary erase, the write performance is still abysmal.

For comparison, here’s my Intel X25-M G2:

What now? I think I will have to replace the drive. It’s the only choice I have left at this point.
If anyone has a better idea, let me know.


On call for work this weekend, so no big updates. Thought I'd share this instead:

10 February 2011

Blog Tone

So earlier today I got mistaken for a spambot, which makes me cringe a little bit that I literally failed the Turing test... It was my intention starting off not to personalize my posts, but should I? Or would it just get in the way of the geekery?

You're the handful of people who actually read my thing, you decide!

Linux Guests on Hyper-V 2008 R2

Had a bit of a scare today -- my blog disappeared! Seems to be back now, so either it was a glitch or someone thought I was spamming. Gotta be a little less overzealous about linking, I guess -- consider me chastised!

Edit: just read an article spelling out the problem, which is worth a look if you have a minute.

So I’m still running a Linux box that hosts a legacy business app that’s about to be replaced, plus a few legacy VPNs. Set up ages ago, when I didn’t have the experience I have today, the machine was a mess – originally installed from the testing branch of what was to become Debian 3.1, with several custom packages (Postfix, Apache, OpenVPN, etc.) – and it has been overdue for some fixup work for quite some time.

As a disclaimer: I realize that Debian in any version isn’t a supported OS on Hyper-V R2 – I just want to share my experiences with this unsupported configuration.

The hardware, an aging IBM xSeries 306m with a Pentium 4 CPU, wasn’t getting any younger, and after a drive failure about half a year ago that led to a system crash (no data loss though – it just crashed the machine; that’s software RAID for you), it was finally time to modernize.

The plan is to consolidate all our DMZ workloads (ISA, OCS Edge, XMPP Gateway, Exchange Edge) on Hyper-V 2008 R2 and doing the trickiest part first seemed like a good idea.

So I created a new VM using SCVMM 2008 R2, selected Other Linux 32bit as the guest OS, inserted a Debian 5.0 netboot CD, and that’s where the problems already started. While the installation worked well in general, the framebuffer used by the Debian installer is awfully slow. It took me about half an hour just to get the install done (on a 5 GB partition of the 80 GB VHD).

After finishing the installation, I formatted the rest of the disk appropriately and then used rsync to transfer the machine contents over. After reconfiguring GRUB, I could choose to boot either the transferred OS with its kernel, or the Debian 5 rescue system I’d installed alongside.

Booting the transferred system worked well enough, but the tulip driver wasn’t compiled into that (custom) kernel, and building the module failed. So I read up a bit and realized that the newest kernel shipped with experimental Hyper-V VMBus drivers, which allowed synthetic NICs to be used.

I tried to compile the kernel after chrooting into the old installation, but it failed because gcc was too old. Not to worry – I compiled it in the rescue system instead, but then couldn’t install the package that make-kpkg created. So I installed it manually, which worked pretty well.

One reboot later, I was back in business, with the extremely verbose Hyper-V drivers cluttering up dmesg. The synthetic NICs showed up as seth0 – seth2; after quickly changing all the necessary configuration files, everything was working.
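For anyone wondering what those configuration changes look like: on Debian the NIC names live in /etc/network/interfaces, so the stanzas just need the eth* names replaced with seth*. A sketch (the 192.0.2.x addresses are documentation placeholders, not the real config):

```shell
# /etc/network/interfaces stanza after the eth0 -> seth0 rename
# (sketch; 192.0.2.x addresses are placeholders, not the real ones)
auto seth0
iface seth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```
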

After a bit more testing, I disconnected the physical machine from the network and plugged the VM into the production VLANs.

I tested everything thoroughly and didn’t find any issues. I sent out an informational mail and continued on my merry way.

Half an hour later, I decided to do a quick systems check again – and I realized that the external interface (seth2 in this case) wasn’t working anymore. tcpdump showed no packets being received, and other machines in the same VLANs didn’t see any answers to their ARP requests either. So I rebooted the VM, and everything was working again. No error messages of any kind, neither in dmesg nor in the system logs or on the Hyper-V host.

Hoping this was just a fluke, I waited until it happened again – which it did, roughly 10 minutes later. So I decided to skip the synthetic devices and go with emulated NICs and the tulip driver.

Everything came back up, but I couldn’t ping any devices on the eth0 VLAN from the start; the other two interfaces worked.

After a few more tries, I arrived at a configuration that has now been stable for 4 hours and 26 minutes, which sounds good so far. For this, I configured a single synthetic NIC as a replacement for the non-working eth0, plus three tulip NICs (of which the first is unused).

There are other things that also worry me:

Every reboot of the Linux machine created the following event log entry on the Hyper-V host:

'LINUX' was reset because an unrecoverable error occurred on a virtual processor that caused a triple fault. If the problem persists, contact Product Support. (Virtual machine ID [])

Loading the synthetic NIC drivers logs the following in the event log on the Hyper-V host:

Networking driver on 'LINUX' loaded but has a different version from the server. Server version 3.2 Client version 0.2 (Virtual machine ID []). The device will work, but this is an unsupported configuration. This means that technical support will not be provided until this problem is resolved. To fix this problem, upgrade the integration services. To upgrade, connect to the virtual machine and select Insert Integration Services Setup Disk from the Action menu.

Loading the synthetic NIC drivers also logs all this on the Linux side of things:

VMBUS_DRV: Vmbus initializing.... current log level 0x1f1f0006 (1f1f,6)
VMBUS: +++++++ Build Date=Feb 17 2010 12:37:00 +++++++
VMBUS: +++++++ Build Description=Version 2.0 +++++++
VMBUS: +++++++ Vmbus supported version = 13 +++++++
VMBUS: +++++++ Vmbus using SINT 2 +++++++
VMBUS: Windows hypervisor detected! Retrieving more info...
VMBUS: Vendor ID: Microsoft Hv
VMBUS: Interface ID: Hv#1
VMBUS: OS Build:7600-6.1-16-0.16485
VMBUS: Hypercall page VA=f80c9000, PA=0x36afe000
VMBUS_DRV: irq 0x5 vector 0x35
VMBUS: SynIC version: 1
VMBUS: Vmbus connected!!
VMBUS_DRV: generating uevent - VMBUS_DEVICE_CLASS_GUID={c5295816-f63a-4d5f-8d1a4daf999ca185}
VMBUS: Channel offer notification - child relid 1 monitor id 0 allocated 1, type {32412632-86cb-44a2-9b5c50d1417354f5} instance {00000000-0000-8899-0000000000000000}
hv_netvsc: module is from the staging directory, the quality is unknown, you have been warned.
NETVSC_DRV: Netvsc initializing....
VMBUS_DRV: child driver (f80dc570) registering - name netvsc
VMBUS: Channel offer notification - child relid 2 monitor id 255 allocated 0, type {cfa8b69e-5b4a-4cc0-b98b8ba1a1f3f95a} instance {58f75a6d-d949-4320-99e1a2a2576d581c}
VMBUS_DRV: generating uevent - VMBUS_DEVICE_CLASS_GUID={32412632-86cb-44a2-9b5c50d1417354f5}
VMBUS_DRV: child device (f73a8634) registered
VMBUS: Channel offer notification - child relid 9 monitor id 1 allocated 1, type {f8615163-df3e-46c5-913ff2d2f965ed0e} instance {9d44a66e-4b09-41d5-80d807ae24bf537d}
VMBUS_DRV: generating uevent - VMBUS_DEVICE_CLASS_GUID={cfa8b69e-5b4a-4cc0-b98b8ba1a1f3f95a}
VMBUS_DRV: child device (f73a5a34) registered
VMBUS: Channel offer notification - child relid 1 monitor id 0 allocated 1, type {32412632-86cb-44a2-9b5c50d1417354f5} instance {00000000-0000-8899-0000000000000000}
VMBUS_DRV: generating uevent - VMBUS_DEVICE_CLASS_GUID={f8615163-df3e-46c5-913ff2d2f965ed0e}
VMBUS_DRV: device object (f73a5ee4) set to driver object (f80dc5c0)
VMBUS: Channel offer notification - child relid 2 monitor id 255 allocated 0, type {cfa8b69e-5b4a-4cc0-b98b8ba1a1f3f95a} instance {58f75a6d-d949-4320-99e1a2a2576d581c}
VMBUS: Channel offer notification - child relid 9 monitor id 1 allocated 1, type {f8615163-df3e-46c5-913ff2d2f965ed0e} instance {9d44a66e-4b09-41d5-80d807ae24bf537d}
VMBUS: channel f73aac00 open success!!
NETVSC: *** NetVSC channel opened successfully! ***
NETVSC: Sending NvspMessageTypeInit...
NETVSC: NvspMessageTypeInit status(1) max mdl chain (34)
NETVSC: Sending NvspMessage1TypeSendNdisVersion...
NETVSC: Establishing receive buffer's GPADL...
NETVSC: Sending NvspMessage1TypeSendReceiveBuffer...
NETVSC: Receive sections info (count 1, offset 0, endoffset 1048000, suballoc size 1600, num suballocs 655)
NETVSC: Establishing send buffer's GPADL...
NETVSC: Sending NvspMessage1TypeSendSendBuffer...
NETVSC: *** NetVSC channel handshake result - 0 ***
NETVSC: Device 0xf6552e80 mac addr 00155d031a09
NETVSC: Device 0xf6552e80 link state up
VMBUS_DRV: child device (f73a5e34) registered

So, it works. But not without troubles. I've still got the physical machine to fall back on, but I sure hope Microsoft will get this to work better.

These issues are the reason why I decided to deploy my private server using ESXi instead of Hyper-V, because I need both Linux and Windows guests.

09 February 2011

SBS 2008 Advice

Take all this with a grain of salt, as some observations may simply be my fault. Also, as times change, these things might change too.
  • Make sure to install Windows Server 2008 SP2 after installing SBS 2008. Some media may come with SP2 already preloaded. You can use the normal SP2 package that is also used for Vista and regular Server 2008
  • Do not install SBS rollup updates before completing the configuration wizard. This is extremely counter-intuitive, but is described on the official SBS blog
  • Installing Exchange 2007 SP2 requires you to follow special considerations
  • Installing WSUS 3.0 SP2, which is needed to support Windows 7, is currently not recommended. I was able to do this without issues on my lab machines, but others have reported issues doing this on machines that were in production. If you're deploying a new SBS server, this should probably be safe. But make sure to test functionality afterward.
  • Always use the answer file to deploy SBS 2008. This will make it possible to choose a custom domain name. Read my post about choosing your AD DNS namespace
  • Do whatever tasks you can using the SBS console. Resist using the normal administration tools as much as possible, as you can easily break SBS with them.
  • Ensure that the AV software you install is compatible with WS08 x64. Symantec Endpoint Protection Manager works well; Forefront Client Security, on the other hand, requires a separate server running 32-bit Windows for management. You may consider deploying FCS unmanaged in smaller environments and configuring FCS using the FCS ADM file
  • Use servers with the Xeon 5500 CPUs. Consider using an E5530 or faster CPU. Using two CPUs (for a total of 16 virtual and 8 physical cores) makes little sense.
  • Buy enough memory. Lots of it. Really. I mean it. You'll need lots and lots of memory. I would consider 12GB the bare minimum. In a 3x4GB configuration, which makes the most sense for the Xeon 5500 setups, this is quite cheap. If you intend to run SQL Server as well, consider bumping the memory to 24GB. Remember that you can only use the first 8 slots in a single-socket machine.
  • Buy enough disks. A good starting layout is 8x147GB 2.5″ disks. Use a RAID 1 for the OS, another RAID1 for Exchange and Sharepoint, and a RAID10 for Data and WSUS. This is all up for debate of course, and it might make sense to consider other disk layouts.
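To make the disk bullet concrete, here is a quick sketch of the usable capacity that layout yields (the helper functions and names are mine; this ignores hot spares and filesystem overhead):

```python
# Usable capacity for the suggested 8 x 147GB layout:
# RAID1 (OS) + RAID1 (Exchange/SharePoint) + RAID10 (Data/WSUS).

def raid1_usable(disks, size_gb):
    # RAID1 mirrors everything; usable space is one disk's worth.
    assert disks == 2
    return size_gb

def raid10_usable(disks, size_gb):
    # RAID10 stripes over mirrored pairs; half the raw space is usable.
    assert disks % 2 == 0 and disks >= 4
    return disks // 2 * size_gb

SIZE = 147  # GB per disk
layout = {
    "OS (RAID1)": raid1_usable(2, SIZE),
    "Exchange/SharePoint (RAID1)": raid1_usable(2, SIZE),
    "Data/WSUS (RAID10)": raid10_usable(4, SIZE),
}
for name, gb in layout.items():
    print(f"{name}: {gb} GB usable")
print(f"Total: {sum(layout.values())} GB usable of {8 * SIZE} GB raw")
```

So roughly half the raw capacity survives, which is the price you pay for mirroring everywhere.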
If you have any additions, or think I'm wrong somewhere, just send in a comment.

08 February 2011

SonicWALL NSA 2400 – SMB Firewall Appliance

Just started using one of these, thought I'd share my experiences.

So far, we have mostly used ZyXEL's ZyWALL products to serve our Small Business customers; however, the ZyWALL line wasn't always very satisfying when moving to the upper end of the Small Business spectrum. Thus, we had a look at SonicWALL; I've been using them for quite some time.
There are a few things about SonicWALL that will be different for people who are used to the low-end market (like the ZyXEL products):
  • You'll need to purchase Software Maintenance in order to be able to download newer firmware versions
  • The old SonicWALL hardware generations (TZ / PRO) have "Standard" and "Enhanced" firmware images; the Standard versions are stripped down and less flexible. The NSA models just have "Enhanced"
  • Registration on MySonicWALL is mandatory

One of the things fixed with the release of SonicOS 5.0 was the graphical user interface: the new GUI is completely revamped and looks like something that belongs in the year 2008. Other improvements include completely redesigned hardware that uses multi-core CPUs to provide real-time traffic analysis.
The NSA series ships with basic firewall/VPN features that are licensed as part of the base hardware. Additional features like anti-virus scanning, content filtering, anti-spam, and intrusion detection and prevention all require extra expenses. This model is similar to what other UTM appliances like the ZyWALL 5 UTM use.
The SonicWALL Global VPN Client is an IPsec-compatible VPN client that works pretty well. There is no 64-bit version yet, and it doesn't work with other VPN clients running on the same PC. If you do not want to use SonicWALL's GVC, the appliance also offers the ability to use L2TP and your operating system's native VPN functionality. While L2TP connections are mostly unrestricted, the number of GVC licenses can be pretty low (e.g. 10 for the NSA 2400).
One of the main advantages over the ZyWALL line of products is the object-based configuration and the ability to have multiple Gigabit interfaces on the hardware; the NSA 2400 offers 6 Gigabit interfaces with the ability to use 802.1Q VLANs to create even more logical interfaces. Even the low-end NSA 2400 offers quite a lot of throughput (I've measured up to 30 MB/s), which is important if you have servers deployed in your DMZ.
Other cool features include "SonicPoint" management, which is basically the same as Symbol's or Cisco's lightweight wireless access points. This is a very cool feature for smaller businesses that do not want to buy separate hardware to maintain their wireless infrastructure.
You can even access a live demo of the SonicWALL web interface to see for yourself.
The good:
  • Very flexible configuration
  • Streamlined GUI with useful features like packet capturing and self-updating log views
  • Lightweight VPN client and the ability to use standard L2TP
  • Lightweight access point deployment using the NSA as a base
  • LDAP integration, preconfigured for Active Directory
  • 6 Gigabit interfaces
  • High performance
The bad:
  • High price of hardware (list: 2700 CHF)
  • High price of mandatory service contracts for firmware updates (list: 1300 CHF for 3Y 7×24 and HW advance replacement)
  • High price of UTM feature licenses (list: starting at 1700 CHF for 3Y AS/AV/IPS)
  • Incomplete user authentication solution (based on an agent using WMI to query the logged-on user instead of secure Kerberos authentication)
  • No redundant PSU or fans to compensate for the high hardware price (the NSA 7500 has redundant fan/PSU)

07 February 2011

70-652 - Windows Server Virtualization

I'm at the Digicomp testing center right now, waiting for my colleague to finish the exam too.
In general, my impression was that the exam was pretty solid but certainly "Enterprise heavy" in focus. There were a lot of questions regarding appropriate configurations for failover clustering, and also several pieces on SCVMM 2008 (the latter, though, were never hard; anyone who has toyed with SCVMM and browsed through the main functionality should be able to answer them).
I've seen a few questions that weren't worded 100% precisely, but that can always happen; the quality was generally high.
Other areas that were featured heavily:
  • Clusters (as mentioned above)
  • Snapshots – especially pay close attention to how snapshots can be reverted, reused, etc. Snapshots can also be used in deployment scenarios
  • Integration between SCOM and SCVMM
  • Disk configuration – the available options for VHD files, their advantages and disadvantages, the usage of physical disks from the host and of course the use of iSCSI disks that are directly attached in the VM
  • Hardware requirements and configuration requirements when setting up Hyper-V – pay close attention to how you configure the Windows bootloader, and what steps need to be taken when enabling hardware-assisted virtualization in the BIOS
  • Proper VM hardware configuration – remember which controllers in Hyper-V are bootable and which are not. Also, think about very old legacy applications that might have problems with newer CPU features available on modern CPUs and about the implications of running an OS that does not support synthetic hardware
  • Network configuration – pay close attention to bigger scenarios involving the cluster heartbeat link, iSCSI connections from the host, iSCSI connections from the VMs themselves, Quorum disks in cluster scenarios. Also, remember the difference between internal and private network interfaces
Did I pass? I'm not sure. There were many cluster questions, and I never had much contact with those, since I primarily work with Small Business customers.
So if you intend to take this exam, make sure you've toyed around with SCVMM (SCOM knowledge isn't necessary; just look up how the two can be integrated). Also, make sure you've set up a Hyper-V cluster at least once. You can emulate an iSCSI SAN by using an open source appliance like FreeNAS, which can export disks using iSCSI. None of the questions I've seen seemed "hard" to me, but I was guessing at a few because I didn't know the topic.
Good luck!
Outlook Anywhere / Outlook Autodiscovery on Windows 2008 still has some problems.
Read this most excellent post that has all the details.
Long story short: modify the hosts file, remove the IPv6 localhost entry (::1), and then add hosts entries for your server. I would recommend against disabling IPv6 on the Exchange server, as this is probably not a recommended or supported configuration.
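As a sketch, assuming a server named `sbs.example.local` at 192.168.0.2 (both placeholders; substitute your own name and address), the edited hosts file would look roughly like this:

```
127.0.0.1       localhost
# ::1           localhost       <- IPv6 localhost entry removed
192.168.0.2     sbs.example.local
192.168.0.2     sbs
```

This forces name resolution for the server onto IPv4 without disabling IPv6 system-wide.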
The root cause is that Outlook 2007 can’t contact a DC/Domain Controller using RPC over HTTP/Outlook Anywhere when used on Windows Server 2008.
Also note that NTLM Authentication for Outlook 2007/Outlook Anywhere is broken on Windows Server 2008.

06 February 2011

Almost a year

So in May, we migrated from our old company name and a Windows Server 2003 infrastructure to Windows Server 2008.

While the configuration back then was very interesting (especially Exchange 2007), and finding vendors that supported their apps under WS08 wasn't always easy, it worked out.

We're running McAfee VirusScan Enterprise, which is supported on WS08. Unfortunately, the ProtectionPilot management app was not supported on WS08, which is why it's running in a WS03 x32 VM. For backup, we've used Symantec BackupExec 12 (since upgraded to 12.5).

I’ve been running six productive VMs in Hyper-V since May. The upgrades to the RTM version of Hyper-V ran flawlessly, and we’ve had zero production issues with those VMs. The VMs are a mix of WS03 x32 and WS08 x64.

Except for one WS08 Core x32 domain controller, all WS08 machines are x64. Even setting up an x64 print server for x32 clients was less of an issue than I initially thought.

The feature most applauded by our users is probably the TS Gateway.

We currently run OCS 2007 in an (unsupported) VM, because we only use the IM functionality right now (the reason VMs are unsupported is that voice heavily depends on timing, which can be icky in VMs). Our plan is to migrate to OCS 2007 R2 when it comes out, this time running on WS08 on native hardware, so we can start our internal VoIP rollout.

IBM has finally released Director 6.1, which supports running on WS08 x64.

For Active Directory, I run two WS08 Core DCs, one x64 (on newer hardware) and one x32 (on rather old hardware). We also have an RODC in our branch office. BackupExec has its fair share of troubles running on RODCs, and so do other apps that depend on SQL Server, like WSUS. So keep this in mind if you want to deploy branch offices: the single-server approach worked with DCs, but it won't with RODCs. Get two machines, one for the RODC and another for the rest.

For branch office connectivity, we’ve always used DFS-N and DFS-R, which has continued to work flawlessly on WS08.

In our edge environment, I've deployed an Exchange 2007 Edge server, an OCS 2007 Edge server, and an ISA 2006 server. The latter two are still running on WS03, which I plan to upgrade as soon as possible.

I currently have only one unresolved issue, which is NTLM authentication for Outlook Anywhere. UR4 should have resolved it, but I haven't gotten around to testing this.

As for the clients: we run three quarters Vista (yeah, yeah, I know) and one quarter XP. The XP machines only remain because I don't have any jurisdiction over them; there are no technical reasons why they shouldn't be upgraded.

So, after all this you will probably assume that I got paid to write it. Well, I do work for a Microsoft Partner, so the software cost associated with upgrading to WS08 was rather low, as we have Software Assurance for our volume licenses and we also get many internal-use licenses through the MSPP.

The experience of deploying and running a production system has been a tremendous help in getting acquainted with WS08 as a platform. I'm currently in the process of deploying my first SBS08 into production, about which I'll write as soon as that project is done.

Still, I honestly believe that WS08 is ready to be deployed. Not anywhere, mind you. Application support is still an issue, and ERP vendors especially are slow to catch up (not us, though; we supported WS08 TS as a platform from the start).

So, what do you think about WS08? Looked at it? Tried it? Running it?

05 February 2011

Stuff Costs Money

Sorry for the late post -- I'm going to try to do this every morning, but I had to catch an early flight to NC for a family occasion. Air travel's put me in a mood, so this is more of a rant than something constructive.

You get what you pay for – at least in the hardware world. If you do not have enough information, it is always difficult to say what you should buy, and how you should compare prices.

The problem is that it is sometimes hard for a customer to understand why there are so many ways to fulfill one single goal, and why the different solutions might have gigantic price differences. If you’re a sales type of guy, it’s important to make sure that you offer the right components, and make sure that the competition does the same.

I need 1 TB of Storage.

Okay, this sounds simple enough. You can now make an offer for a simple 1TB disk and a USB2 disk case. That totals about 600 CHF including MWST.

Does this solve the customer's problem? Does it help him? Is it what the customer wants?
Maybe. You don’t know.

You can generally leave a better impression if you ask the customer what he needs the storage for, and also think about secondary problems like backups.

I need 1TB of Storage for our ERP-Database running on one of our IBM x3650 servers. We have no idea on how we should back it up, though.

Now you already know a lot more, and you know that the 600 CHF single-disk solution won't even come close. You still don't know everything (i.e. you could deliver 1TB with RAID10, but also with RAID5). You could also try to upsell to a SAN solution, though it wouldn't be necessary. What you do have is the possibility to upsell a backup solution – but there's a catch here too.
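The RAID10-versus-RAID5 choice alone changes the hardware bill noticeably. A quick sketch with hypothetical 500GB disks (the helper names are mine, and this ignores hot spares and formatting overhead):

```python
import math

def disks_for_raid10(target_gb, disk_gb):
    # RAID10: stripes over mirrored pairs, so half the raw
    # capacity is usable; minimum of 4 disks.
    pairs = math.ceil(target_gb / disk_gb)
    return max(2 * pairs, 4)

def disks_for_raid5(target_gb, disk_gb):
    # RAID5: one disk's worth of capacity goes to parity;
    # minimum of 3 disks.
    data_disks = math.ceil(target_gb / disk_gb)
    return max(data_disks + 1, 3)

print(disks_for_raid10(1000, 500))  # 4 disks (2TB raw) for 1TB usable
print(disks_for_raid5(1000, 500))   # 3 disks (1.5TB raw) for 1TB usable
```

Same "1TB" on the quote, different disk counts, and very different write performance, which is exactly why the bare requirement isn't enough to price an offer.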

ERP software is usually very important. Can the customer afford the degraded performance associated with an online backup? Or is there a need to use database mirroring and then backup the mirror? Problems like these are usually overlooked when shopping for hardware, and if you react right you have the chance to sell more hardware, sell services, and all this while helping the customer – you’re doing the right thing and making more money.

Real problems
I see this problem mostly with Windows Small Business Server 2003. Yes, you can get a fully working machine including licenses for 3000 CHF, but the question is whether that's what the customer really wants.

SBS is a product that makes many compromises in order to offer highly competitive pricing; it even works against many best practices recommended by Microsoft. That doesn't mean that SBS is a bad product, just that it's a mixed bag. Selling two servers is usually impossible for Small Business customers, even though it would be best for domain controllers.

An SBS machine usually runs the whole company – it functions as a domain controller, file server, print server, Exchange server, and sometimes even ERP server or Internet gateway (I would recommend putting the latter two roles on separate machines/appliances). If the SBS server is down, none of the information workers can do anything. That's why you shouldn't skimp on the SBS hardware. Hardware RAID, dual power supplies, brand components, updates, etc. are all important. Don't buy whiteboxes or low-end servers.

Why? You could buy an IBM System x3250, stick two 500GB disks and 4GB of RAM in it, and you have the same basic attributes. The problem is that such a machine is much slower IO-wise (you're running Exchange and a file server on this thing!) and much less reliable than better 2U servers like the IBM System x3650.

But if you're running a bigger environment, go out and buy two x3250s and use them as domain controllers (and ONLY domain controllers).

There are many companies offering IT services to small businesses. While many of them have competent personnel who know what they're doing, sometimes they're more sales-oriented than goal-oriented. They will offer the wrong hardware for the job, and you're the one who has to explain to the customer why your offer is more expensive than the one from your competition.

That’s why it’s important that even a sales rep understands what he’s selling.

04 February 2011

ACTUAL first post

Hokay, here goes nothing!

So Windows 7 RTM is out. I tried playing with XP Mode, which didn't work for me on the RC version, and after a bit of debugging I couldn't find the issue.

So, with a freshly installed laptop and the new release candidate of Windows XP Mode, I gave it a whirl again. But it failed with the same sequence of completely unintelligible error messages, namely "Integration features have been disabled" and the even more helpful "Parameter is incorrect".

So I installed it on my desktop as well, where it worked without a hitch. The major difference between my desktop and my laptop is that the laptop is joined to the corporate domain and the desktop at home obviously is not.

I dug a bit deeper into the event log and drilled down to Microsoft\Windows\Virtual PC\Admin, where I found this error message:

Could not enable the Integration features for 'Windows XP Mode'. The current mode is – 0. Last Channel start Value – 0x800700B7, Last Disconnect Reason – 0x300001B, Last Extended Disconnect Reason – 0x0, GHI State of the guest machine – 0x1

Now, this whole "disconnect" thing sounded strange until I remembered that Windows Virtual PC uses RDP to deliver the screen – and at that point I thought about the RD Gateway server that's being pushed by a GPO.

So for a quick test, I set the following key in the registry to zero:
HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows NT\Terminal Services\UseProxy

And tried starting Virtual PC again. It worked! Setting the key back to 1 predictably led to the same error message.
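For reference, the same change expressed as an importable .reg file would look roughly like this (a sketch mirroring the manual edit above; set the value back to 1, or delete it, to restore the policy):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows NT\Terminal Services]
"UseProxy"=dword:00000000
```

Note that since the key lives under a Policies branch, the GPO will simply write it back on the next policy refresh, which is why this is only good for testing.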

So next I excluded Windows 7 users from this GPO using a simple WMI filter, which will be a temporary measure to mitigate this issue.
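The WMI filter itself can be a one-line WQL query against the OS version; this is only a sketch (test it in your environment first), and note that version 6.1 also matches Windows Server 2008 R2, so the GPO would skip those machines too:

```
SELECT Version FROM Win32_OperatingSystem WHERE NOT Version LIKE "6.1%"
```

The GPO then applies only where the query returns a result, i.e. on anything that is not Windows 7 / Server 2008 R2.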

This seems to be a bug somewhere, as those settings shouldn't break Virtual PC. I'm not sure where I should report this, but I'll have a look at that. At least now people with the same issue should be able to find this solution through Google.

Inaugural Post!

Er... Test? Test?


Hiya. I'm Mike Yang, and this is my brand-spankin' new blog, where I hope to post ramblings, insights, and thoughts from my own humble brain and around the web. Enjoy!