Hello Sparkbox!

Back when I started Cincy Clean Coders, it felt like the Dayton community was nearly nonexistent.  It seemed like I needed to head down to the big city to attract a community of Clean Coders.

Fast forward 2 years… the Cincy group has stopped meeting, Dayton Clean Coders is going strong (with a bad-ass logo), an Elixir Group is forming, and GemCityJS is growing.  There’s Build Guild and others as well.  The Dayton tech scene is thriving!

It turns out the people at the center of making much of this happen are also doing great things for the web at Sparkbox by leading the world in responsive web design.

Starting next week I’m excited to join the awesome Sparkbox team, focusing hard on making the web and Dayton community an even more amazing place!

Between the platform shift to open source, Ruby, Rails, etc. and the challenge of working with some of the most passionate devs I’ve met, this looks to be an exciting challenge just to keep up!

Microsoft ALM & MVP Community

This move likely means stepping away from the Microsoft ALM & MVP community.  These have been some of the most passionate and hard working people I’ve met.  They give an immense amount of their time towards helping build great software.

While TFS has its fair share of critics, they’re moving in a great direction, fast.  And with a leader like Brian Harry making comments like this in private email lists…

I’m not horribly worried about being copied.  If we can make the whole world provide better services, I’m happy.


…I have no doubt that set of tools will be in a great place in the coming years.

Elixir needs at least 4 JSON parsers

Elixir has, at the time of this post, 3 JSON parsers hosted on expm.

Some time back I went looking for one and started exploring elixir-json, including the github repo.  I loved how the encoder used protocols to implement the encoding of different types.  It’s beautiful.  Kudos to @carloslage.
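To give a feel for the protocol approach (this is my own minimal sketch for illustration, not elixir-json’s actual code), each type gets its own `defimpl` of a shared encode protocol:

```elixir
# Minimal sketch of protocol-based JSON encoding -- my own illustration,
# not elixir-json's implementation.  Each type owns its encoding.
defprotocol JSONEncode do
  def encode(value)
end

defimpl JSONEncode, for: Integer do
  def encode(i), do: Integer.to_string(i)
end

defimpl JSONEncode, for: BitString do
  # Naive: real encoders must also escape quotes, backslashes, etc.
  def encode(s), do: "\"" <> s <> "\""
end

defimpl JSONEncode, for: List do
  def encode(list) do
    "[" <> Enum.map_join(list, ",", &JSONEncode.encode/1) <> "]"
  end
end
```

Adding support for a new type is then just another `defimpl`, with no changes to the dispatcher.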

Then I looked at the decoder.  It’s full of elixir-isms… tuples, binary pattern matching, etc.  Over the last few nights I decided to implement my own JSON parser, learning from Carlos where I ran aground.

I don’t like the use of HashDict for objects, but keys containing spaces rule out most options, like Keyword lists.  I’m also not a huge fan of the nested case for dealing with the “what’s next?” of key/value pairs.

def parse_object( acc, << rest :: binary >> ) do
  { key, rest } = parse_content rest
  { value, rest } = lstrip(rest) |> parse_object_value

  acc = [ { key, value } | acc ]

  case lstrip(rest) do
    << ?}, rest :: binary >> -> { HashDict.new(acc), rest }
    << ?,, rest :: binary >> -> parse_object acc, lstrip(rest)
  end
end

Overall it feels very cohesive.  You can see clearly how the pieces compose after being dispatched from parse_content.

def parse_content( << m, rest :: binary >> ) when m in ?0..?9, do: parse_number << m, rest :: binary >>
def parse_content( << ?", rest :: binary >> ), do: rest |> parse_string
def parse_content( << ?{, rest :: binary >> ), do: lstrip(rest) |> parse_object
def parse_content( << ?[, rest :: binary >> ), do: lstrip(rest) |> parse_array
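Both snippets lean on an lstrip helper to skip whitespace between tokens.  A hypothetical version (my guess at its shape, not the repo’s actual code) could be as simple as:

```elixir
defmodule Strip do
  # Recursively drop leading JSON whitespace from the front of a binary.
  def lstrip(<< c, rest :: binary >>) when c in [?\s, ?\t, ?\n, ?\r], do: lstrip(rest)
  def lstrip(binary), do: binary
end
```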

As with elixir-json, I also do nothing to handle invalid JSON content.  Oh well, it was a learning tool.

Restoring TFS 2012 to Sandbox

One of my clients recently asked for some help with getting their TFS server up to snuff.  After getting the software itself in line, I needed to restore a TFS installation in a sandbox environment to plan and test out some significant changes to their Work Item data.  It’s not exactly a smooth experience, so let’s hope this eases the day of some other poor soul.  As with all things, SSDs make this take a bearable amount of time.

Step 1: Install, but don’t configure TFS 2012 Update 3

TFS server components have levels of installation doneness.  Since this is a sandbox, I’m not worried about scaling out and will just install a single server.  To do this I:

  • (15 min) Install Windows Server 2012 Standard on a VM
  • (60 min) Windows Updates… oh mighty Zeus the Updates
  • (10 min) Wipe sweat from brow and walk around office cursing while fuming over what appears to be update 9,123,758 not budging
  • (15 min) Lay down SQL Server 2012 even though production runs on 2008 R2.  Details can be found on Manually Install SQL Server for Team Foundation Server
  • (10 min) Install TFS 2012 with Update 3.  Don’t run the configuration wizard!!!
  • At this point you should have all the pieces of TFS, but nothing configured.  TFS Administration console looks something like this:



Step 2: Restore TFS Databases

You can use the Administration console to restore your backups if they were made with a scheduled TFS backup.  You could also use the tfsrestore.exe tool.  In my case, some SQL Service permission problems and disk space limitations required me to manually restore each of the databases.



Step 3: Change Server IDs

Since I’ll be connecting to these in a non-isolated environment and I don’t want tools getting confused, I’m going to play it safe and change the server IDs.  TFS uses GUIDs to identify servers and collections.  Visual Studio and other tools using the SDKs will get confused by local settings and caches, so this is a safeguard against black magic.


    TfsConfig.exe ChangeServerID /sqlinstance:localhost /databasename:Tfs_Configuration

Step 4: Fix Accounts

My new server is not attached to any domain, let alone the one used for the production TFS.  With this in mind, I need to update some accounts embedded in TFS.

    TFSConfig.exe accounts /resetowner /sqlinstance:WIN-4jsv8rkkbki /databasename:Tfs_Configuration
    TFSConfig.exe accounts /add /accounttype:applicationtier /account:tfsservice "/password:shhhhhh" /sqlinstance:localhost /databasename:Tfs_Configuration

Step 5: Configure TFS Application Tier

From the TFS Administration Console, click Application Tier and then Configure Installed Features.  This will give you a number of configuration scenarios.  We want Application Tier Only, which we’ll hook up to our now-restored Tfs_Configuration database (and friends).


Make sure you choose the SQL Server to which you restored your databases (localhost for me).  Once you get to the Application Tier settings page, it should show the service account we added in Step 4: Fix Accounts.



Everything should come out green upon verification and you can let it rip.  If not, invoke your Google/DuckDuckGo-fu.


Step 6: Update Remaining Settings

There will still be a number of settings that refer to the old topology.

Change the URLs to your new server:


In this sandbox I have no need of Reporting Services or SharePoint.  I just disabled both:



Step 7: Rest

It’s certainly not straightforward, but it’s much better than in past versions.

At this point you should be able to connect to your newly restored team project collection and team projects.  If you find that a previously connected Visual Studio pre-selects Team Projects that were also selected in production, you likely skipped Step 3.  This could cause weird behavior against production in the future.

Using HashDict.update for Keyed Reductions (aka group by) in Elixir

I wanted to start playing with Elixir’s map and reduce functions to get a better feel for collection transformations in the language.  For this I grabbed some movie data here and planned on pulling out a few perspectives.

First problem: we need to turn the data into a list of tuples.  The pseudo-transformation we want to apply:

file -> lines
lines -> parts
parts -> tuples

The results ended up looking like this:

[gist id="6051117" file="movies.exs"]
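In case the gist doesn’t render, here’s a hypothetical sketch of that shape.  The field layout (tab-separated title, year, rating) is my assumption; the real movies.exs may differ:

```elixir
# Hypothetical to_movies sketch following the file -> lines -> parts -> tuples
# pseudo-transformation.  Assumes "title\tyear\trating" rows (my assumption).
defmodule Movies do
  def to_movies(text) do
    text
    |> String.split("\n", trim: true)                # file -> lines
    |> Enum.map(&String.split(&1, "\t"))             # lines -> parts
    |> Enum.map(fn [title, year, rating] ->          # parts -> tuples
      {title, String.to_integer(year), String.to_float(rating)}
    end)
  end
end
```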

For our data perspectives, let’s start small: the number of movies per year.  This is still a transformation, but it’s not going to be one-for-one.  We’re instead going to reduce the results after mapping.  Why would we map?  Turns out the only thing you need to know is the full list of movie years… with dups.  With that we can do an “add or update” against a hash for each year.

[gist id="6051117" file="uniqueness.exs"]

What we’re doing is providing an entry point, count_unique, which takes a collection.  This creates a new HashDict which seeds our {year, count} and then recursively calls down into a variant of HashDict.update.  This variant will insert a new key if not found, with the 3rd parameter being the seed value.  If the key is found, HashDict.update will call our anonymous function to increment the value already found.
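Since the gist isn’t shown inline, here’s my own sketch of that idea, written with Enum.reduce rather than explicit recursion (HashDict is deprecated in current Elixir in favor of maps, but I’ll keep it to match the post):

```elixir
defmodule Counts do
  # Insert-or-update counting: an unseen year gets the seed value 1;
  # a seen year has its count bumped by the anonymous function.
  def count_unique(years) do
    Enum.reduce(years, HashDict.new, fn year, dict ->
      HashDict.update(dict, year, 1, &(&1 + 1))
    end)
  end
end
```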

This pattern works well for getting a sum, too.  Here we map our collection to pull the year and rating.  I adjusted my original to_movies to use String.to_float so that I have a numeric rating.  From there I use the same HashDict pattern with the rating as our seed and accumulator.

[gist id="6051117" file="sumations.exs"]
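Again, a rough sketch of the shape (mine, not the gist’s code), assuming {year, rating} tuples coming out of the map step:

```elixir
defmodule Sums do
  # Same HashDict insert-or-update pattern, with the rating serving as
  # both the seed value and the accumulator.
  def sum_ratings(year_rating_pairs) do
    Enum.reduce(year_rating_pairs, HashDict.new, fn {year, rating}, dict ->
      HashDict.update(dict, year, rating, &(&1 + rating))
    end)
  end
end
```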

Next we’ll look at doing something a little more interesting by calculating an average and distributing the effort across nodes using our previous parallel map.

Triangle Kata in Elixir using Erlang processes for Parallel Maps

Last time I described my setup for learning Elixir, the language built on top of Erlang with a Ruby bent.  Today I wanted to get the hang of the processes and mailboxes that are so important to both languages.  To do this I chose the Triangle Kata as my background story.  Take a look:

Update: You might wonder what that slow function is all about.  I added it to clarify the parallelism.  You would expect the slowest classifications to appear at the end of the resultant list.
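The embedded code doesn’t survive here, so as a rough illustration (my own sketch, not the kata solution itself), a process-based parallel map can be built from nothing but spawn, send, and receive:

```elixir
defmodule Parallel do
  # Spawn one process per item; each sends {pid, result} back to the parent.
  def pmap(collection, fun) do
    parent = self()

    collection
    |> Enum.map(fn item ->
      spawn(fn -> send(parent, {self(), fun.(item)}) end)
    end)
    |> Enum.map(fn pid ->
      # Matching on ^pid collects results in spawn order.
      receive do
        {^pid, result} -> result
      end
    end)
  end
end
```

Note this sketch pins results to input order by matching on `^pid`; a variant that receives any `{pid, result}` message instead would yield results in completion order, which is what makes the slowest classifications show up last as described above.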