5

I'm trying to come up with a good approach to creating a reusable development environment, so that rebuilding a machine that starts to sputter doesn't take a couple of days, and so that new developers can be onboarded faster. Reading about Quora's development setup made me consider alternatives to the old development-environment build delay. For a .NET/Windows shop, though, how are you solving this problem?

  • Local virtual machine on your desktop/laptop that you can share with the other members of the team?
  • A dedicated server (physical or virtual) that all developers remote-desktop into and that can be easily backed up? (This obviously requires a network connection, so there's a downside.)
  • An instance in the cloud (like Quora)?
RyanW
  • 829
  • 1
  • 6
  • 10

3 Answers

2

I've been using Vagrant to help me set up VMs. Vagrant builds a machine from what is essentially a template VM, then loads customizations for your particular project that you define with Puppet or Chef.

I've used this to create a standard VM for my team, loaded with all the software our project depends on.

And Vagrant makes it easy to blow a machine completely away and start over if something oddball happens to it. For example, Linux was giving me a disk error on a virtual machine... I "solved" the problem by blowing the VM away and having Vagrant rebuild it.
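As an illustration of the approach (not from the original answer), a minimal Vagrantfile for a team-standard VM might look like the following. The box name, port numbers, and manifest path are placeholders, and the exact syntax depends on your Vagrant version:

```ruby
# Vagrantfile - defines the team's standard VM (names and paths are examples)
Vagrant.configure("2") do |config|
  # The template VM ("box") everyone starts from
  config.vm.box = "hashicorp/precise64"

  # Forward the app's port so you can browse to it from the host
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Project-specific customization via Puppet (Chef works similarly)
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "default.pp"
  end
end
```

With a file like this checked into the project, `vagrant up` builds the VM from scratch and `vagrant destroy` throws it away, which is what makes the "blow it away and rebuild" workflow cheap.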

RyanWilcox
  • 514
  • 4
  • 6
  • Looks interesting. I'm not sure how helpful it is for Windows guests, and it looks like Windows hosts require some unhealthy workarounds. But I really like the approach and will try it out for some of my personal projects where I use Linux more. – RyanW Jun 19 '11 at 00:25
  • @RyanW, do you have a link to explanations of problems on Windows hosts that need those "unhealthy workarounds"? – Jodes Aug 13 '15 at 08:24
1

You definitely need to look at some kind of virtualization. Being able to instantly create a new copy of your default setup is amazingly powerful. And not just being able to replace the environment, but having multiple different environments you can run testing on, all without having to commit to physical hardware? Very, very nice.

The thing with virtualization is that you can create your environment, and then ship copies all over the place. Host a "main" environment on the server, and still give everyone a copy for their desktops. It's nice. It adds a ton of redundancy, and allows you to reset to a "default" environment at any time.

Satanicpuppy
  • 6,210
  • 24
  • 28
  • I like the hybrid idea of local and server, which virtualization product are you using? I'm trying out VirtualBox now, but we use VMWare for most of our servers, so I should probably just stick with that for the desktop too? – RyanW Jun 17 '11 at 16:53
  • Note that "instantly create a new copy of your default setup" != VMWare. On our local setup, with the free version of ESX on a rackmount server with a little RAID array, it takes hours to make a copy of an image. Client sites with better setups still take tens of minutes. QEMU, on the other hand, can do it instantly, because it uses copy-on-write images. – Tom Anderson Jun 17 '11 at 16:57
  • My experience with VM-based development has not been as good as I'd hoped. Partly because it's always been on VMWare, which I think is a bit inflexible, but also because the infrastructure teams have still thought in terms of 100% provisioning of permanent instances. So, on my current project, our team has 30 VMs, each of which lives for years. Getting more, or chucking one and replacing it with a fresh one, means persuading the infrastructure team you need it. I honestly don't understand why they do this rather than seeing VMs as disposable, short-lived constructs, but there you go. – Tom Anderson Jun 17 '11 at 17:00
1

Starting about fifteen years ago, for each project, I've developed a checklist that runs like this:

  1. Wipe the hard drive and install --- version of --- OS from the original disks.
  2. Apply ---- updates to the OS.
  3. Install ---- compiler.
  4. Apply ---- updates to the compiler.
  5. Load ---- other software, if needed.
  6. Install ---- version of ---- source control system.
  7. Pull the sources for the project from the source control server using --- parameters.
  8. Do --- to build the project.

It takes a few hours, but most of it can run unattended. Back when I had a company with employees, we did all our release builds that way.
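As a sketch of that unattended portion (my illustration, not part of the original answer), the checklist can be driven by a small script that logs each completed step, so an interrupted build resumes where it left off. Every command here is a placeholder `echo` standing in for the real silent-install invocation, and the step names and `BUILD_LOG` variable are invented for the example:

```shell
#!/bin/sh
# Sketch: drive the unattended steps of the checklist from a script.
# Completed steps are recorded in a log file, so the script can be
# re-run after an interruption and will resume where it left off.
set -e
LOG="${BUILD_LOG:-build-steps.done}"   # set BUILD_LOG to relocate the log
touch "$LOG"

run_step() {
    name="$1"; shift
    if grep -qx "$name" "$LOG"; then
        echo "skipping $name (already done)"
        return 0
    fi
    "$@"                      # run the step's actual command
    echo "$name" >> "$LOG"    # mark the step complete
}

# Steps 2-8 of the checklist; step 1 (the OS install) happens before this.
run_step os_updates       echo "apply OS updates here"
run_step compiler         echo "run the compiler installer here"
run_step compiler_updates echo "apply compiler updates here"
run_step other_software   echo "install other dependencies here"
run_step source_control   echo "install the source control client here"
run_step pull_sources     echo "pull the project sources here"
run_step build            echo "build the project here"
```

Each `echo` would be replaced with the real installer command (most Windows installers have a quiet-mode flag for exactly this); the checklist itself stays the human-readable source of truth.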

An alternative is to ghost a disk image after step 6, and use that to skip those steps - but that assumes you're using an OS install that isn't locked to the machine.

This has worked like a champ on Windows, Mac OS 8/9/X, and Linux. One of my clients recently had one of their engineers build a deliverable for them this way, and he was amazed at how simple and error-free it was.

Bob Murphy
  • 16,028
  • 3
  • 51
  • 77
  • Thanks Bob, the approach you have is the general one I'm talking about, and while "It takes a few hours" is better than taking a day or more, I'm looking to do better than that speed-wise. I would be interested to hear more about the "unattended" aspect, though. I agree about the ghosting/image; it can be tricky if the image has hardware dependencies. – RyanW Jun 19 '11 at 00:20
  • @RyanW: By "unattended", I mean most of the time is spent waiting for installers to run; someone just needs to look occasionally to see if a long step has finished and they need to do a minute or two of clicking and typing. If your checklist is thorough and non-technical enough, you can have an admin or intern do this while going about other duties. I do it myself while programming by using a kitchen timer to remind me when to see if the system needs attention. – Bob Murphy Jun 19 '11 at 20:32
  • +1, a simple checklist (plus some individual readme files for individual tools) is definitely tool-agnostic and OS-agnostic. And given that you have to do this only about once a year per machine (at most), it is usually not necessary to automate the process beyond this. – Doc Brown Aug 23 '23 at 03:55