OpenStack – Initial Installation and Configuration

Posted on 23. Mar, 2013 by .


I spent the last week of February 2013 hanging out with some of the smartest people I know at the Java Posse Roundup. It's my favorite geek week of the year. I've been to seven of the eight years it's been held, and each year I come back full of inspiration and new ideas. This year was no exception. I spent the afternoon with Romain Pelisse learning about Puppet. In the course of that conversation Carl Quinn joined in, and the discussion morphed into a general one around DevOps and the "Cloud". At the time Carl was the Engineering Tools Manager at Netflix (since then he's announced that he's leaving to join Riot Games as a Software Architect). It was quite interesting to hear about some of the approaches that Netflix uses to run their operations.

It's a well-known fact that Netflix is a huge customer of Amazon Web Services (AWS). Many (most?) of the Netflix applications are built to run in AWS. Applications can spin up dozens or hundreds of instances of themselves in the Amazon infrastructure. It's very impressive. I've struggled with this approach and have tried to relate it to something that would work in our architecture. Given that we build a multi-tiered Java Swing app that our clients use to manage large volumes of very proprietary and financially sensitive data, the public cloud is just not a good fit.

As I was mentioning this, Carl and Romain both mentioned private cloud solutions like Eucalyptus and OpenStack. In the course of the discussion I had a number of “ah ha” moments and within 60 minutes went from “this cloud technology does not apply to my world at all” to “I need to get a private cloud up and running as soon as I get back home”.

Use Cases

Coincidentally, just before I left, a new server arrived. It is intended to replace our seven-year-old "Development Tools" server, which hosts our source code, code review tools and a few other related bits. Over the years, this server has suffered from a serious case of bit rot as new versions of our tools have been installed, while old ones rarely get retired or deleted. The first ah-ha moment came when Carl mentioned the very common practice of spinning up virtual machines to host single services, keeping them up only for as long as they are needed. When new versions of the services are rolled out, new VMs are spun up and the old ones are simply deleted. It struck me that this would be a fantastic way to manage our dev servers. We could have a dedicated VM for Subversion, another one for Git, a third one for Crucible and so on. When we want to upgrade to a new version of any of those tools, it would be a simple case of spinning up a new VM, installing the latest version and then migrating the data.

The second use case for us comes in the form of our Jenkins Continuous Integration environment. We have an environment that we call our test-farm: a collection of both physical machines and virtual machines that are used to run our test suites. These machines are pinned 24/7 and we're always at capacity. Bringing a new Jenkins slave online is a bit of a chore, as we need to install a variety of tools and technologies (Ant, Gradle, Java, Groovy, etc.) onto each slave, and keeping them all the same and up to date is a pain that nobody likes, so it doesn't get done very often. (Provisioning new slaves is a job for Puppet, but that's the topic of a future post.) In the context of the private cloud, being able to spin up new Jenkins slaves easily would be a huge benefit and would get our test runtimes back under control.

Choosing OpenStack

When I got back I did a bit of research into private cloud implementations and settled on OpenStack. I applied the same criteria that I use whenever I’m looking at a new technology.

- How active is the community?
- What resources are available (books, conferences, training, etc.)?
- Who is using the technology?
- What is the product road map?
- How closely does this technology align with my world?

OpenStack was the best fit for us.

Installing OpenStack

Installing and configuring a private cloud solution is non-trivial. Don't let anyone tell you any different. The number of technologies that are used under the covers is both very impressive and daunting. OpenStack does a pretty good job of abstracting away much of it, but it's still there. Installing OpenStack from scratch involves a 20-page installation and configuration process. Fortunately there are some kind souls who have invested a lot of time and effort to put together installation scripts that automate much (but not all) of the process. We used these scripts from StackGeek, which were a lifesaver.

We started with a bare metal install of Ubuntu 12.04 and ran through the installation scripts. We did this probably four or five times, both in local virtual machines and directly on the server. We ran into various issues along the way, and as we learned things it was often easier to "nuke the world" and start over. Most of our trials and tribulations were not with the scripts or OpenStack in general, but with the complexity of OpenStack's networking requirements and our (my) lack of deep knowledge of Ubuntu networking configuration.

Configuring OpenStack

The StackGeek scripts do a pretty good job of creating the configuration files that the various components of OpenStack need. Much of it consists of providing passwords and things of that nature. The hardest part (for us) was getting the networking configuration figured out. OpenStack can be configured in many different ways; it's conceivable to have many machines configured to manage dozens, hundreds or even thousands of virtual machines. Our initial foray into this world is not nearly that ambitious. We wanted a single machine running the entire OpenStack stack.

OpenStack uses a private network to manage the virtual machines. It also requires a virtual adapter for the bridge to work. The first trick was getting our network interfaces configured just right.

auto eth0
iface eth0 inet static
address 10.0.1.34
netmask 255.255.255.0
network 10.0.1.0
broadcast 10.0.1.255
gateway 10.0.1.254
auto eth2
iface eth2 inet manual
up ifconfig eth2 up

eth0 is a physical network connection set up with a static IP address and is used to communicate with our network.
eth2 is the interface that OpenStack will use for its private network.

OpenStack uses a bunch of configuration files (stored in /etc/nova/); the one we spent a lot of time poring over is nova.conf.

Here is a subset of the configuration that we ended up with in nova.conf

## network specific settings
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth2
--flat_network_bridge=br100
--fixed_range=10.99.0.0/24
--floating_range=10.0.1.193/26
--network_size=6
--flat_network_dhcp_start=10.99.0.2
--flat_injected=False
--force_dhcp_release
--iscsi_helper=tgtadm
--connection_type=libvirt
--root_helper=sudo nova-rootwrap
--verbose

Settings
- public_interface – The interface used to communicate with the outside world
- flat_interface – The interface to use for the private network
- flat_network_bridge – The bridge used when all OpenStack components are on the same box. OpenStack will create this; don't manually create it in /etc/network/interfaces (as we tried, many times)
- fixed_range – The IP range (address/mask) that OpenStack will use to allocate IP addresses on the internal network
- floating_range – The IP range (address/mask) that OpenStack will use to allocate IP addresses on the external network. These can be assigned to running VMs to expose them on the larger network

It took us many hours of reading and experimenting to get these values right for our environment. Like most things, it makes sense once we got it sorted out but many brain cells were expended in the search for what these values needed to be and what they were used for.

Trouble with DHCP
We ran into another issue, one that took us an entire day to figure out, related to DHCP. OpenStack uses dnsmasq to serve up DHCP addresses for the internal network. When a new virtual machine is spun up by OpenStack it gets an internal IP address. We ran into a problem where we could see (in /var/log/syslog) the virtual machines requesting an IP address from the DHCP server (dnsmasq-dhcp[2508]: DHCPACK(br100)), but they would time out and never get one. It turns out that we needed to add an iptables rule for the DHCP exchange to complete between the VMs and the local DHCP server. This post was instrumental in pointing us in the right direction.

iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill

Summary

After a little over a week, we were able to get an OpenStack cloud up and running on our new server. We've just started configuring some Ubuntu VMs with our development services, but so far things look pretty good. We'll spend a bit more time over the coming days and weeks before flipping this into our production environment, but we're really pumped about what we've seen so far. A big shout out to my colleagues Glen and Shawn for helping me work through this. A future post will talk about the VMs we actually spin up and how we (hopefully) configure them with Puppet.


Goodbye Tracker. Hello Jira

Posted on 31. Jan, 2013 by .


19 Years and 85,000+ issues

I’m sitting watching what I hope will be the last migration of our bug tracking system to Jira. This has been a long, long time coming and I’m very happy to see the end of this project.

We're replacing a PowerBuilder/Oracle solution that we wrote 19 (yes, 19!) years ago. If you had come to me 19 years ago and said we would still be running Entero on Tracker today, I would have thought you were bonkers. Well, 19 years later we're finally putting that system out to pasture after a good long life. It has its warts and hasn't seen any improvement in the past 10 years or so, but it mostly worked. It's also a core business system and the center of a whole lot of business processes: planning, project management, day-to-day sprint execution, client billing, builds and anything else related to running a software company.

The first issue was raised May 25, 1995, and since then more than 500 unique individuals (both internal and external) have raised just over 85,000 issues. There are 750,000+ time entries totaling over 1.7 million hours of time.

I'm sure you can appreciate that replacing a system like this takes a tremendous amount of planning and buy-in from basically the whole organization. Getting to the point tonight where this system will be turned off took a massive amount of effort and basically consumed my last year.

This hasn't just been a systems change. If that were the case, it wouldn't have been such a huge effort. Don't get me wrong, the migration from Tracker to Jira is a non-trivial task (which I'll speak more about later), but we've coupled the rollout of Jira with an organizational change and the adoption of ITIL for our client-facing support. These three initiatives were very much intertwined, as we are basically re-imagining the EnteroOne business unit (which comprises > 80% of Entero).

Those 85,000 issues will now live in one of seven Jira projects. Our "Customer Experience" project covers the ITIL Incident and Problem Management processes. This is where clients raise issues and change requests, and ask for support, training and all manner of help. We have some shiny new workflows to help these teams marshal the dozens and dozens of issues that come in over a week. Jira gives us some nice boundaries for passing work items explicitly to teams. That alone is going to be a huge improvement. A second project holds workflows for Requirements Management and Change Management. Requirements Management is responsible for documenting requirements for our very complex system and providing the various development teams with enough information and test scenarios to bring items into our sprint process.

Our operations teams are very happy to finally have their own projects for both internal systems support and external client support.

Migrating to Jira

As I mentioned above, the migration is non-trivial. It consists of the following general steps:

1) Grab a snapshot of the most current Tracker data from an Oracle export
2) Run some SQL scripts to clean and generally prepare the data
3) Generate comma-separated files (using Groovy) for each of the projects
4) Import the issues from the CSV files one at a time
5) Run Groovy scripts to morph all the description and comment data into Jira
6) Run SQL scripts to populate worklogs (time entries), issue links and labels

Start to finish, this process takes about 4.5 hours to run on some decent hardware. I've run through four full migrations this week, and dozens and dozens of partial migrations in recent weeks and months. Tonight is hopefully the last one.
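
To give a flavour of step 3, here is a stripped-down sketch of what a Groovy CSV generator can look like. This is illustrative only, not our actual script; the connection details, table and column names are invented for the example.

import groovy.sql.Sql

// Illustrative only -- connection details, table and column names are made up.
def sql = Sql.newInstance('jdbc:oracle:thin:@dbhost:1521:TRACK', 'tracker', 'secret',
                          'oracle.jdbc.OracleDriver')

// Jira's CSV importer wants a header row followed by one line per issue.
def quote = { v -> '"' + (v ?: '').toString().replace('"', '""') + '"' }

new File('customer-experience.csv').withWriter('UTF-8') { w ->
    w.writeLine('Summary,IssueType,Reporter,Created,Description')
    sql.eachRow('select summary, issue_type, reporter, created, description ' +
                'from tracker_issue where project = ?', ['CX']) { row ->
        w.writeLine([row.summary, row.issue_type, row.reporter,
                     row.created, row.description].collect(quote).join(','))
    }
}
sql.close()

The real scripts are of course much larger; this only shows the shape of the CSV generation step.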

Jira Extensions

We're using some third-party plugins, including:

- the Structure plugin from ALM Works to group and manage issue/task dependencies across multiple projects
- Tempo to augment the time-entry capabilities of Jira
- Script Runner and the JIRA Misc Workflow Extensions for some fancy transition logic

We’ve written a couple of plugins to handle activities like patching issues into release branches.

We also wrote a web front end for our customers to use. I know what you're thinking: wait, isn't Jira a web app already? Why the heck would you build another one? We want to keep the client interaction as clean and as seamless as possible. While I love Jira, it is very powerful and can be complex, and we really only have a few user stories for our clients. They can raise issues, comment on existing ones, add attachments and close issues. One client can't see other clients' issues, as they can contain very proprietary information, even in a simple screenshot. This support portal also has hooks into our documentation and other client-related information that isn't stored in Jira. We wrote a Grails app that uses the REST APIs for Jira and it really came together in a very short amount of time. From the feedback we've gotten, our beta testers think it's a big improvement over the old web app that we wrote 8 or so years ago.
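
For a sense of how little plumbing the portal needs, here is a rough sketch of raising an issue through Jira's REST API from Groovy. It is not our actual Grails code; the URL, credentials, project key and issue type are placeholders.

import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// Placeholder Jira instance, credentials, project key and issue type.
def jiraUrl = 'https://jira.example.com'
def auth    = 'support.portal:secret'.bytes.encodeBase64().toString()

def payload = JsonOutput.toJson([fields: [
    project    : [key: 'CX'],
    issuetype  : [name: 'Incident'],
    summary    : 'Pricing screen throws an error on save',
    description: 'Raised from the client support portal'
]])

def conn = new URL("${jiraUrl}/rest/api/2/issue").openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.setRequestProperty('Authorization', "Basic ${auth}")
conn.setRequestProperty('Content-Type', 'application/json')
conn.outputStream.withWriter('UTF-8') { it << payload }

def created = new JsonSlurper().parseText(conn.inputStream.getText('UTF-8'))
println "Created ${created.key}"   // e.g. CX-123

In a Grails app, calls like this naturally sit behind a small service class, which keeps the controllers tiny.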

 

Summary

There are a bunch of us that are really looking forward to using Jira. It's been a long time coming, and the workflows and processes that we've put in place are something that the company has needed for a while now. We're maturing as a software company and I'm happy to be responsible for bringing in new tools to help this evolution. I'm sure the next few days/weeks are going to be a bit bumpy, but it's going to be great overall. I just know it.


Gradle Migration Part 1: Importing build.xml

Posted on 04. Jan, 2013 by .


Importing Ant Builds

As I mentioned in the introduction, the approach I'm taking is not a wholesale rewrite of the 2500+ line build process but an incremental rewrite of tasks as needed. One of the great features of Gradle is the ability to import existing Ant builds and treat the Ant targets just like Gradle tasks.

This makes migration much more palatable.

Assuming your Ant build file is called build.xml, you simply need to add the following line to your build.gradle file:

ant.importBuild 'build.xml'

Now we can call any Ant target that is in build.xml, but through Gradle.

So instead of

ant build

We can call

gradle build

With that simple start we can now run all of our Ant targets through Gradle.

Plugins and Ant task conflicts

The next step I wanted to take in the Gradle process was to apply the Groovy plugin. Applying the Groovy plugin automatically configures the Gradle project with a number of tasks that are useful for building mixed Groovy and Java projects. The link goes into great detail, but things like compiling, running tests, building jars, etc. come for free with the Groovy plugin. In the future I will be taking advantage of some of these tasks as I move the build logic from Ant into the Gradle build.

I did run into one small gotcha when I applied this plugin.

e:\proj\e1>gradle clean-build
[ant:taskdef] Could not load definitions from resource taskdef.properties. It could not be found.
 
FAILURE: Build failed with an exception.
 
* Where:
Build file 'E:\proj\e1\build.gradle' line: 4
 
* What went wrong:
A problem occurred evaluating root project 'e1'.
> Cannot add task ':clean' as a task with that name already exists.
 
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
 
BUILD FAILED
 
Total time: 2.511 secs

As I mentioned above, the Groovy plugin injects some useful tasks. In my case, I have a number of targets in my Ant build file that have the same names and therefore conflict with the tasks injected by the Groovy plugin.

Fixing this is fairly straightforward. I swept my build.xml file and added an ant. prefix to any target that conflicted with the Groovy plugin's tasks.

In my case, these targets

init
clean
build
test

became

ant.init
ant.clean
ant.build
ant.test

With these modifications in place I'm now able to run all my Ant targets directly through Gradle, which is fantastic!
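
For reference, the build.gradle at this stage doesn't need to be much more than the plugin plus the import. This is a sketch rather than our actual file, and the Groovy version shown is purely illustrative:

// build.gradle -- minimal sketch of the starting point, not our full build file
apply plugin: 'groovy'        // injects clean, build, test, jar and friends

repositories {
    mavenCentral()
}

dependencies {
    // illustrative version; point this at whatever Groovy your project uses
    groovy 'org.codehaus.groovy:groovy-all:1.8.6'
}

// Pull every target from the Ant build in as a Gradle task. The targets that
// clashed with the Groovy plugin's tasks were renamed in build.xml beforehand.
ant.importBuild 'build.xml'

Note that declaring the compiler on the groovy configuration is how Gradle 1.x did things; later versions handle the Groovy dependency differently.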


Migrating from Ant to Gradle

Posted on 16. Dec, 2012 by .


This will be the first of a few posts in which I document the migration of a fairly large multi-module build from Ant to Gradle. This will likely take us a few months, as there isn't any short-term need to move quickly. Our Ant build process does work; it's just very ugly and long overdue for a rewrite.

Background

Before I get too far into the migration plan and approach, I'll provide a bit of context that will hopefully help you decide if any of what follows is relevant to you and your project.

The project that we're looking to migrate to Gradle is a fairly large, multi-tiered Java/Groovy application. We have a Swing client that is installed on our users' desktops and an application server component. We have our own app server that we built, and we will also (begrudgingly) deploy the server-side code to WebLogic if we have to. Our product is a commercial product that is deployed to our clients, who run it on their own hardware/infrastructure.

The first commit in the project was made on Wednesday, March 10, 2004, and over the past eight-plus years the project has seen over 53,000 commits made by nearly 60 different people. There are between 1.5 and 2 million lines of code, including tests. Some of it actually works ;)

There is a "master" build.xml (Ant) at the root of the project, and each module has its own build.xml file. There are a number of targets in the main build.xml which in turn call targets in the module build.xml files as needed. The targets cover all the usual stuff like compiling code, running unit tests and recreating developer database schemas from test data. Additionally there are targets for building installers for both the client and the app server components. There are a lot of other specific targets, some used by developers as part of doing local builds and some bits and pieces needed by the distribution process.

The main build.xml has just over 2600 lines of tasty XML goodness. The module build.xml files have between 50 and 300 lines in them. These build files have seen hundreds of commits from a bunch of folks over the years, so you can appreciate that they've suffered a fair amount of bit rot.

Why Gradle?

We briefly looked at Maven when it was all the rage a few years ago. There are a few things that Maven did bring to the table: dependency management and convention over configuration. We manage our dependencies by hand (we still do, actually), and our project layout would have required major surgery and rework to line up with the Maven conventions. I decided at the time that the pain of moving to Maven wasn't worth the time and effort, and in retrospect it was exactly the right call.

Fast forward a couple more years and Gradle appears on the scene. I could tell from the very beginning this was going to be the technology that we would use.

1) Build DSL (Domain Specific Language)
2) Builds written in Groovy
3) Fantastic documentation
4) Customizable project layouts
5) Plugins and extensible plugin architecture
6) Dependency Management
7) Multi-Project builds
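
To make points 4 and 7 a little more concrete, here's a rough sketch of bending Gradle around an existing multi-module project that doesn't follow the Maven conventions. The module names and source directories are hypothetical, not our real layout:

// settings.gradle -- hypothetical module names
include 'common', 'app-server', 'swing-client'

// build.gradle (root) -- point Gradle at a legacy source layout
subprojects {
    apply plugin: 'groovy'

    sourceSets {
        main {
            java.srcDirs      = ['src']         // hypothetical legacy directories
            groovy.srcDirs    = ['src']
            resources.srcDirs = ['resources']
        }
        test {
            java.srcDirs   = ['testsrc']
            groovy.srcDirs = ['testsrc']
        }
    }
}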

Dipping our toes in

In the early days of Gradle, the Ant support wasn't quite there yet. Now it's fantastic, but a year or so ago it was still under active development. A lot of other really great features were there and pretty stable, so I decided the easiest thing for us to do would be to add a couple of distribution tasks to a build.gradle that would be called once our main build was complete. This was easy to accomplish in Hudson simply by installing the Gradle plugin (at the time we were using version 0.9) and then adding a build step to call the Gradle task in the new build.gradle file.

This worked very well and continues to be the way our server gets packaged up.
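
As an illustration of what one of those distribution tasks can look like (this is a sketch, not our actual task; the names and paths are placeholders):

// build.gradle -- sketch of a packaging task run after the Ant build finishes
apply plugin: 'base'                     // archive conventions plus a clean task

task serverDist(type: Zip) {
    baseName = 'entero-server'           // placeholder artifact name
    version  = '1.2.3'                   // placeholder version

    from('build/ant-output/server') {    // wherever the Ant build drops its artifacts
        include '**/*.jar'
        include '**/*.properties'
    }
    from('scripts') {                    // placeholder path for startup scripts
        into 'bin'
    }
}

Hudson then calls the task (gradle serverDist in this sketch) as the final build step.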

Next Steps

The build is a bit messy in that we have the Ant build doing a bunch of tasks and finishing, and then a Gradle build picking up some of the artifacts that were generated by the Ant build. It would be nice to have all of that running in one technology.

We're coming to the end of a release cycle, and once that's done I'll be making a few minor tweaks to our build.xml so that our existing Ant targets can be called by Gradle. The minor issue we have is that when I apply the Groovy plugin in our build.gradle file, the build ends up with some conflicting tasks. That is, we have some targets in our build.xml (test, dist, etc.) that conflict with the ones that the Groovy plugin injects. It's a simple task of refactoring our build to use names like test_internal and dist_internal so that the names don't conflict.

Once that is in and stable, the task of rewriting various Ant targets into Gradle tasks will be undertaken. To be honest, I'm not sure how long that will take. Like most shops, we have no end of work in front of us, and taking time to rewrite a couple of thousand lines of Ant isn't on the product road map. Likely we will just do it as we need to; that is, when an Ant target needs some TLC, it will get rewritten as a Gradle task. In a perfect world we would just start from scratch, but I don't see that happening, at least in the short term.

Details

Gradle Migration Part 1: Importing build.xml


Git Workflow

Posted on 23. Aug, 2011 by .


One of the fellows at work asked me to jot down how I use git on our projects. It seems there are as many different ways to use git as there are git users. This is how I'm using git at the moment; your mileage may vary.

Master Branch

I don’t do any work in the master branch. I do keep this branch quite current with the state of trunk in our svn repository. When I’m in the office I’ll do a

git svn rebase

to get the latest bits and to keep current with what other folks on the team have been working on.

Feature Branches

This is probably my favourite feature of git, and something I knew I was missing in svn; I tried to make it work with changesets, but it always felt clunky. I usually have at least one fairly substantial feature that I'm working on at any given time. These feature branches tend to be fairly long-lived: at least a week, possibly more. I create a feature branch using a

git checkout -b feature-name

I don’t keep my feature branches up to date with the trunk. The reason I do this is to keep my development work totally separate from what other folks are doing. I only want the commits on this branch to be related to the feature I’m working on. This gives me much more flexibility when I finally want to integrate this back into the trunk. It may be a bit more work to do the merges, but I tend to be in very specific areas of the system that other folks don’t tend to be in so the merges are generally pretty painless.

When I actually want to merge a feature branch these are the steps I go through.

git checkout master
git svn rebase
git checkout -b feature-merge
git merge feature-branch

At this point I have a standalone branch that is at the trunk with my feature branch merged in. If there were any conflicts, I would fix them in this integration branch. Once any conflicts are resolved I’ll fire the completed feature into subversion with a

git svn dcommit

And if the feature is “done” both the feature branch and the integration branch get deleted with a

git branch -D feature-branch
git branch -D feature-merge

Why the extra branch? If the merge gets really messy then I can just throw away the feature-merge branch. I'm sure I could accomplish the same thing by using git reflog and a reset, but this approach just feels safer to me. This is purely a mental thing and nothing to do with git. In my head I'm thinking "ok, I've got this branch here and the trunk there and I want to merge them together over here". If the wheels fall off I simply git branch -d my feature-merge branch and I'm back to where I started.

Bugfix branches

My workflow for doing bug fixes is similar to the feature branch workflow, except these branches tend to be very short-lived and I don't go through the extra integration merge step.

git checkout master
git svn rebase
git checkout -b bugfix

hack, hack, hack

git commit -m "fixed some bugs. created some other ones"
git checkout master
git merge bugfix
git svn dcommit
git branch -d bugfix

These branches are very short-lived, maybe an hour or two, maybe as long as a day, so the chances of conflicts are low.

Patching back into release branches

It's pretty common for a bugfix to be patched back into at least one release branch in Subversion; sometimes more than one. Again, this is something that is really easy with git.

Typically I’ll fix the bug in a bugfix branch created from master. I’ll go through the steps outlined above to commit, merge and fire it back into svn.
The biggest difference with this one is that I need to keep track of the commit hash so I can cherry-pick the commit.

I keep meaning to write a bit of bash/groovy/something to automate this a bit but I haven’t taken the time to do that yet.

After committing from master:

git log --pretty=format:"%h was %an, %ar, message: %s"

I'll then copy the hash(es) I need to the clipboard.

Next, check out a release branch:

git checkout 7.00.42

Then cherry-pick the commit(s)

git cherry-pick <hash>
git svn dcommit

Repeat the checkout, cherry-pick, dcommit for each release branch this fix needs to be applied to.
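
I mentioned above that I keep meaning to automate this. Purely as an illustration of what that might look like in Groovy (this isn't a script we actually run; the usage and branch names are hypothetical):

// patchback.groovy -- illustrative sketch only
// usage: groovy patchback.groovy <commit-hash> <release-branch> [more release branches...]
def hash     = args[0]
def branches = args[1..-1]

def run = { String cmd ->
    println ">> ${cmd}"
    def proc = cmd.execute()
    proc.waitForProcessOutput(System.out, System.err)
    if (proc.exitValue() != 0) {
        throw new RuntimeException("'${cmd}' failed, stopping here")
    }
}

branches.each { branch ->
    run "git checkout ${branch}"
    run "git cherry-pick ${hash}"
    run "git svn dcommit"
}
run "git checkout master"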

Other useful bits

git reflog

Git reflog keeps track of the goings-on across branches. It's saved my bacon a couple of times. This Stack Exchange post was very useful.


Late to the git party

Posted on 01. May, 2011 by .


Introduction

I've been playing with git for a couple of weeks now. I must admit that I'm pretty late to the git party and haven't really seen much need to use a DVCS for our projects at Entero. We've used source control since we started 15 years ago. We began with RCS (yes, RCS) when all the cool kids were using CVS. From RCS we moved to PVCS. We don't speak of the dark days of PVCS.

When we started rewriting our products, I knew we weren't going to be using PVCS, and the choice of a version control system was really important. I hadn't used CVS much, but to be honest it wasn't getting rave reviews at the time. There was this new upstart VCS called Subversion that was supposed to fix all the issues with CVS. We've been using Subversion since November 2003, and it has been one of the technologies that has just worked. We have a number of repositories hosting our projects. Our two main products are large: EnteroOne has 38,362 revisions as of this writing. Within this set of revisions, we have 786 tags and 81 branches. In any given sprint, we have about 20 people committing content to the repository.

It is from this context that I’m coming to git.

git

My first exposure to git was at Java Posse Roundup 2009, where there was a fascinating discussion around managing technical debt. It was within this discussion that a number of folks were talking about git and Mercurial and how revolutionary they were. As is normally the case when developers are excited about a new technology, they typically talk about the most 'advanced' features, the features that advance the state of the art. In the case of DVCSs, these concepts were discussed:

- not having a central repository
- devs being able to pull content from each other's working copies
- devs having the entire history locally without the need to be connected to the main repository
- the pain of merges goes away

I remember thinking at the time (and to a certain extent still do) that while these features are certainly powerful, what chaos would they introduce into our development process? The notion of developers trying to coordinate changes without a central server quite frankly scared the bejeezus out of me. Why would I want to mess with a process and a technology that has served (and continues to serve) us very well?

I think I've come to the conclusion that there is room for both git and Subversion in our process. There are a number of great features in git that certain developers on the team would find very useful. However, keeping a central repository that is the one true source of our system still makes sense to me. I know it's possible to use a model with git as the central repository, but I'm not yet convinced that the amount of work to transition the team, build processes and Hudson configurations is worth it. Quite likely this will happen over time, but I think the git-svn integration will serve us very well.

Process

It's probably worth a few words about our process. We develop in one-month sprints. We release about once a week to QA. Hudson takes care of our incremental builds, where a full set of regression tests, including UI tests, is run on each commit. At the end of a sprint, the trunk is branched into a release branch. The release branch undergoes two weeks of stabilization where mostly just bug fixes are applied. The release is then shipped to clients. Not every release goes to every client. Some clients take a bunch of releases in a row; some will take one or two a year. Release branches live as long as we have a client live on that release. New development is rarely "patched back" into release branches, but it's been known to happen.

The team develops on the trunk. We rarely have feature/experimental branches, and of those, they tend to be very short-lived and are rarely merged back into the trunk in their entirety.

I know other teams can't work with this model for some reason. I'm not sure why, to be honest with you. We have a large code base (well over 1M lines) and have upwards of 20 people per sprint adding code. Things are generally fairly stable (backed by thousands of tests, though we could use thousands more, I'm sure).

It's not perfect, but we consistently deliver releases to multiple clients month after month, so we must be doing a few things right.

How I see myself using git

Given the above process, what would using a DVCS like git buy us?

Local History

The first thing is that there are a few of us that work remotely, some all the time, some occasionally. Having a working copy of the code with history is very helpful, particularly if that history contains release branches.

Local Branches

Coupled with local history is the ability to have local branches. While historically we haven't felt the need for feature branches, it is common that I'm working on a few different things at the same time. While not ideal, it's common to have a few issues/stories on the go for any number of reasons. In Subversion/IDEA I use changesets to keep the development items as separate as possible. This works pretty well, but it's still very easy to inadvertently commit changes from one item along with another. With true local branches this is not an issue. Git's ability to switch between branches seamlessly is awesome. Related to this local branch concept is the freedom to commit frequently. I tend to commit a fair amount in Subversion, but even then those commits tend to be complete pieces of work, as I don't want to break the incremental dev builds. With git, it's nice to commit more frequently as I'm developing; perhaps an idea isn't fully baked but is good enough to want to keep around. Having a clean working copy is a very liberating feeling and something that I wouldn't have thought would make a difference, but it does. There is something about not having to think about a bunch of in-progress changes in a working copy that may or may not be related. It's changing the way that I code and I like it a lot.

git svn

For the time being, and perhaps a long time into the future, I see the svn repository being the master and any local git repositories as just that, local. As features are developed, bugs fixed, etc., they will be committed to the main repository using git svn dcommit. I haven't been able to figure out if there is an equivalent concept in git to Subversion properties. We use bugtraq properties to relate svn changesets to bugs/requests in our bug tracking system.

Git Features that intrigue me

I'm really curious to delve into cherry picking and bisection. As far as I can tell, these concepts are unique to DVCSs and look very interesting.

I've signed up for a day of training with GitHub. I'm hoping that a few more aha moments will happen between now and then, but a full day of git goodness should help my understanding along.

Reference

Primarily for my own reference, I include the commands to create a local git repository from our svn repo.

git init (into empty dir)
git svn init http://[url to root of svn repos] -s
git svn fetch -r [revno]
git svn rebase
git repack -adf --window=5000 --window-memory=5000
git gc

I couldn't run a git gc without first doing the git repack; git would return an out-of-memory error after processing < 5% of the working copy. The values for window and window-memory were derived using trial and error. This was run on a Windows 7 x64 box with 6GB of RAM.


My local git repository doesn't have any remote branches or tags in it for some reason. I've tried creating the local repo with a -s as well as explicitly specifying the trunk, tag and branch options, but in both cases I only get one master branch and no tags.

Update:
It turns out that the reason I wasn't getting remote branches and tags was that I populated my git repository with a git fetch instead of a git svn fetch. Now I have tags in my local git repository that line up with svn branch/tag points.

Summary

This post has turned into more of a ramble than I would have thought. It's still early days but like I mentioned, I'm starting to see the git light. I have no doubt that there are going to be lots of bumps along the way but so far I really like what I see.


Network Configuration – Home

Posted on 26. Dec, 2010 by .


The Challenge

Connect my office and our house to the network. No problem, right? Small problem: my office is in a separate building, about 95m (300 ft) from the house. I briefly thought about running some Cat 5 or fibre between the buildings but figured a wireless solution would be easier. Well, less digging anyway ;-)

The Gear

My Internet service is provided by Rogers over their 3G network. It's a decent service, definitely not as speedy as a cable solution from someone like Shaw, but decent enough for typical browsing, Skype conversations and VPN to the office. Rogers provides an Ericsson W35 Mobile Broadband Router they've branded as the "Rocket Hub". It's a combination 3G modem, 4-port Ethernet and Wi-Fi (802.11g) device.

If my office were in the house, this wouldn't be a very interesting post, as at this point I would be done: plug in the Rocket Hub, connect the notebook to the Wi-Fi network and put my feet up. But no, the office is in a separate building over 300′ away, much too far for a wireless signal.

The W35 doesn't have an external antenna, so that's the first problem to solve. I need to be able to extend the range of the network to the house, and typically the way to do this is with an external antenna of some sort. As it turns out, a neighbour of ours had an extra antenna that they weren't using and gave it to me a while ago.

In my tickle trunk of hardware bits, I also have a Linksys WRT54G. This is a wireless router that's been around for a bit and has the added bonus of having a few different firmware options available for it (more about that later). This router does support an external antenna.

Those components will take care of the office side of things, but even with the antenna boosting the signal, it won't be enough to provide a decent signal to the entire house. For that, I have an Apple AirPort Extreme Base Station, which can provide Wi-Fi (802.11n) services.

I figured that with these four devices, I should be able to get decent coverage for both my office and the house.

The Setup

The first thing I did was disable the Wi-Fi feature of the W35. Its only role in this configuration would be to send and receive packets between my internal network and the internet.

The Linksys router was connected to the W35 via Ethernet. The Linksys router provides the wireless signal to my office as well as sending the wireless signal over the external antenna to the house. After searching around and talking to a few folks that know a lot more about this sort of thing than I do, I opted to upgrade the firmware on the Linksys router to one that exposes more features of the device. There are a couple of options out there, but I opted to use the DD-WRT firmware found at dd-wrt. The primary reason for doing this was to enable WDS.

Once I verified that the wireless network was working fine in the office, I ventured outside and mounted the antenna. It's a directional antenna, so it was oriented towards the house.

At this point I was able to get a strong working signal on the back deck. As I suspected, though, the signal wasn't strong enough in the house.

In the house, I connected up the AirPort Extreme and reconfigured it to use WDS.

This article was a huge help.

Once that was set up, everything worked great.

Summary

- W35 used to connect to the Internet; Wi-Fi turned off.
- Linksys WRT54G updated with DD-WRT firmware, connected to the W35 via Ethernet (port 1 on both devices), with WDS enabled and configured.
- External directional antenna connected to the WRT54G and oriented towards the house.
- Apple AirPort Extreme configured to use WDS.

[Diagram: HomeWireless-Network.png]

Thanks

I wanted to add a special thanks to the folks that chimed in on this thread on the Rogers forum. Both "Chris" and "skinorth" had some great suggestions that helped me piece together this solution.


GriffonCast

Posted on 16. Sep, 2009 by .


I've started a new project with the purpose of promoting the Griffon framework and helping to build the community around the project. The GriffonCast is a screencast that I hope to produce at least once a month.

Griffon is a framework for building rich desktop applications. It leverages the Groovy programming language and a number of Groovy's key features, including builders. It's a fantastic effort, and while it's still fairly early days, the team is making great headway on the roadmap. The first episode of the GriffonCast is available for viewing and/or downloading. If you have any comments or would like to see specific topics covered, drop me a note or post your thoughts on the Griffon mailing list.


Java One 2009 – Online Resources

Posted on 09. Jun, 2009 by .


The slides for all the technical sessions have been posted at JavaOne Online Technical Sessions and Labs. You'll need to be an SDN member, but it's free to sign up.

The keynote videos are also available at General Session Details and Video Replays.

There are also a number of audio interviews available at the Java One Radio Podcast.


Java One 2009 – Summary

Posted on 07. Jun, 2009 by .


Summary

In the blink of an eye, the 2009 edition of Java One is over. It was a good conference. Not the best Java One I've been to, but still very much worth the trip down to San Francisco. The biggest unknown is what impact Oracle's purchase of Sun will have on Sun's direction for Java. There is a ton of momentum in Sun and in the community that Oracle would do well to respect initially. Seeing Larry Ellison on stage speaking with Scott McNealy about investing in the platform is a very good thing. On this, only time will tell. As expected, JavaFX featured prominently in many of the sessions. I think this technology has some potential if Sun (and Oracle) stick with it. Sun has a habit of taking things to almost complete and then letting them languish. In many cases the community picks up the loose ends and tries to add libraries, frameworks, etc., but that is getting old. There are a lot of engineering resources being placed on JavaFX right now and it looks like it's tracking pretty well. I'm cautiously optimistic that they can pull this off.

I continue to be very excited about the momentum behind Groovy and related projects (Grails and Griffon). If nothing else, I came away from this year's Java One with a renewed sense of optimism about the direction of this language and the community behind it. I got to meet a few people from the Groovy and Griffon communities that I've only 'met' through Twitter, and that's great.

Areas I'll be digging into further in the coming days/weeks/months:

- Terracotta's Hibernate caching
- JIRA
- Hudson
- Google Collections
- the Ribbon component and the Substance Look and Feel
- Griffon

Areas that I'll be keeping my eye on:

- JavaFX, specifically components and layout managers
- the Language Workbench from JetBrains

Here are a few stats:

- 3 keynote presentations attended
- 15 technical sessions
- 7 Birds of a Feather sessions
- Visited with a 'bunch' of vendors, including a great hour with the guys at Atlassian talking about Jira, Clover and Bamboo
- Met up and visited with a number of acquaintances and friends that I've met over the years at Java One, the Java Posse Roundup and now on Twitter: Ken, Joe, Carl, Dick, Brendan, Pete, Stefan, Andres, Dave and Fred.

Java One for me is more about the community and the conversation than it is about the technical content.
