Adopting Functional Programming Languages (Part 7)

(Note: This is the seventh and final part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  You’ll find the earlier parts here, here, here, here, here and here.)

Conclusion

Functional languages have been developing for over 50 years, mainly in academia and niche applications, but recently many have evolved into viable choices for general-purpose software development. At the same time, the software industry’s need for the benefits these languages provide has grown:

  • The increasing complexity of software has brought focus to the benefits of solutions built around the strengths of functional languages.
  • The need to build robust systems running reliably in concurrent environments has made immutability a de facto requirement of these solutions.
  • Data processing intensive requirements (e.g., analytics) are a natural domain for functional languages with their focus on composability and readiness for parallel execution.
  • The easy integration of Scala, Clojure and F# with existing code running on the JVM and CLR has lowered the cost of adopting these languages in a polyglot environment.
  • The ability of these languages to attract high performing software developers makes their adoption a sound move for organisations looking to increase the productivity of their development teams.
  • Software architecture approaches focusing on service composition provide low risk opportunities to experiment with new languages alongside existing codebases.

Adopting Functional Programming Languages (Part 6)

(Note: This is the sixth part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  You’ll find the earlier parts here, here, here, here and here.)

Adoption

We will discuss language selection approaches in the Language selection section below, but beyond that decision lies the pivotal question of where to first trial a functional language within your organisation.

As a general rule, start small. Starting small helps minimize the risk of the adoption. Starting small usually means working on a small piece of software or a small part of a larger piece of software. For many organisations, this is easier said than done. Although there are prevailing trends towards building large solutions as a collection of smaller independent services, many solutions are still built and deployed from a single monolithic codebase. Typically this codebase is written in a single language and executed by a single homogenous process. In this type of environment, choosing to adopt a new language for a piece of the solution presents these choices:

  • Extract some existing functionality into a separate module and re-implement it in the new language. In this context “separate module” needs to be independent from the main codebase at least in terms of development language. On the JVM, a Clojure or Scala module can be developed independently and then deployed as part of the larger single application if needed. In practice, building this new module as a separately deployable service will provide additional benefits to your architecture.
  • Create a separate module for a new piece of functionality and implement it in the new language. This is applying the above thinking to new functionality rather than existing logic.
  • Look for an area of non-production code where adoption experiments can be carried out in an isolated manner. Developer or tester tooling or in-house monitoring tools are all potential places where new languages can be applied.

Not surprisingly, greenfield development provides the most opportunities to choose new languages for all or part of the solution. In the case of a recent experience we had with a client in the US (http://bit.ly/1fYwbZQ), Clojure was used to build a custom CMS forming part of a larger solution.

Polyglot Programming

Adopting a functional language alongside more traditional enterprise languages is embracing polyglot programming – the selective application of several different programming languages to harness their respective strengths (http://bit.ly/1joh7rS). Even when web applications are developed with a single primary language, you will usually find significant amounts of other languages like SQL and JavaScript. Polyglot programming goes a step beyond that and suggests that you consider options such as a highly productive web framework like Ruby on Rails that dispatches to backend web services written in a combination of Java, Erlang (where the Actor model can assist in building robust distributed subsystems) and Haskell (for some heavy data transformation work) where appropriate.

Irrespective of whether you have a driving need or desire to adopt functional languages within your organisation, making the changes necessary to allow this sort of adoption is beneficial for other reasons, best expressed by this quote: “The recurring theme is ‘the right tool for the job’ … the most important lessons in our experience are how to lower the barriers to choosing the right tool” (http://bit.ly/NsF0iR)

People/recruiting

Most experienced IT people admit (sometimes reluctantly) that choice of technology has a relatively low level of influence on the success or failure of a project or product. These people do generally agree that focusing on softer issues like team capability, culture, customer involvement and effective project management are far more likely to influence the outcome. However, there is a very important connection between programming language choice and these key success factors.

Organisations strive to build teams that harness the latent power of technology to provide innovative solutions to their customers. Building these teams requires people with a strong passion for technology and a desire to work in productive environments. Harnessing the surge in interest in functional programming can help position your organisation as a destination employer for these types of developer. This will provide short-term benefits through the application of functional programming as well as longer-term advantages of building highly competent development teams.

Indeed, a growing number of developers are investing more and more of their own time in learning and honing their functional skills. These developers see the benefits in this style of development and appreciate how better tools help them serve their employers.

This is the same type of thinking that led to some developers seeing the benefit of a simple, new C-like language without the overhead of explicit memory management in the mid-to-late 1990s. This was the first wave of developers to adopt Java.

This is the same type of thinking that led some developers to see the benefits dynamic languages brought in terms of expressiveness and development speed over their statically typed cousins, and to surf the Ruby/Ruby on Rails wave from the mid-2000s.

This is the same type of thinking that will lead to some developers becoming early adopters of the next waves of software development languages and approaches in the years to come.

These types of developer keep a keen eye on what is evolving on the horizon of their industry. They spend considerable (personal) time researching the topics that interest them. They are likely to be well connected into a network of similarly minded people, either through community user groups or common employers. They often move in groups from destination employer to destination employer as they spot environments that value their presence. They are likely to be energetic contributors to open source software and/or have active GitHub accounts.

In short, these types of developer are highly sought after hires for an employer. It is true that there will be a smaller pool to choose from than for Java or C# developers, but the quality of developer will be much higher.

Language selection

If your development group is keen to trial functional programming, there are two broad approaches they can take in the selection of the language. These choices are similar to popular models for spoken language learning in use the world over.

Approach A: Dipping your toes in

For an existing enterprise development group, an obvious decision might be to stay aligned with the language platform you are currently using and pick a functional language which fits neatly into that platform (e.g., Scala for a Java shop, F# for a Microsoft shop).

There can be downsides to this approach with Scala, as it’s very easy for experienced Java developers to continue writing Java code with the occasional Scala idiom thrown in for good measure. The easy blend between core Java syntax and Scala’s myriad extensions can actually make it harder to unlearn many of the lessons of an imperative, object-oriented background and fully embrace functional thinking.

Approach B: Immersion

In this world, you deliberately choose a language that is syntactically separate from any the current development team has used. The alien syntax is a constant reminder that the mental approach needed for this language is different. Examples of languages that could be used in this case include any of the LISP variants (with Clojure being a good choice for Java environments), Haskell or Erlang.

Each approach has its own merits. Approach A will provide the most immediate benefit in terms of adoption, with a longer-term risk that the full power of a functional approach may never be reached because of the ever-present temptation to fall back on old habits. Approach B involves a steeper initial learning curve but should result in developers who have a clearer and deeper understanding of functional programming.

Evaluation

When adopting a new language, it also makes sense to think about how the adoption should be evaluated.

An excellent presentation discussing the issues surrounding language choice is “Nobody Ever Got Fired for Picking Java” (http://bit.ly/OAkv4O) by Alex Payne, who spends much of it on the difficulty of choosing objective criteria on which to fairly evaluate a number of candidate languages. Alex concludes by recommending a weighted decision matrix to capture the criteria used for the evaluation.

In 2013, ThoughtWorks used a number of criteria to help select a language with an Australian-based client in the wealth management industry (see http://bit.ly/1gz2c66 for a presentation on this evaluation). The team spent three weeks cross-evaluating Java, Scala and Clojure, taking regular measurements of the three languages using a combination of objective and subjective criteria. See below for a set of graphs showing one of the objective criteria being measured: the time spent in minutes (y-axis) on production code, test code, tool management and library management per day (x-axis).

fp-whitepaper-img-4

The team also used weightings to balance out seven criteria between short-term (i.e., project) and long-term horizons. These criteria appear on the left hand side of the tables below. On the right hand side are subjective evaluations by the team members on how well they thought Clojure (the selected language) would satisfy each of those criteria. Note that the “Ease of maintenance” scores were based on a “throw it over the wall” support model – the team decided to mitigate this risk by rotating the support developers through the development team on a regular basis.

fp-whitepaper-img-5

This approach is by no means the definitive way to evaluate language selection, but it does highlight some areas you may want to consider. Using weighted criteria during evaluation will also help people involved in the evaluation decide on the critical aspects of the experiment.
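To make the mechanics of a weighted decision matrix concrete, here is a minimal Java sketch; the criteria, weights and scores below are hypothetical placeholders rather than the figures from the evaluation above.

     import java.util.LinkedHashMap;
     import java.util.Map;

     public class WeightedDecisionMatrix {

         public static void main(String[] args) {
             // Hypothetical criteria and weightings (summing to 1.0).
             Map<String, Double> weights = new LinkedHashMap<>();
             weights.put("Team learning curve", 0.20);
             weights.put("Ecosystem maturity", 0.25);
             weights.put("Ease of maintenance", 0.30);
             weights.put("Hiring appeal", 0.25);

             // Hypothetical scores (1-5) given to each candidate language.
             Map<String, Map<String, Integer>> scores = new LinkedHashMap<>();
             scores.put("Java", Map.of("Team learning curve", 5, "Ecosystem maturity", 5,
                     "Ease of maintenance", 3, "Hiring appeal", 2));
             scores.put("Scala", Map.of("Team learning curve", 3, "Ecosystem maturity", 4,
                     "Ease of maintenance", 3, "Hiring appeal", 4));
             scores.put("Clojure", Map.of("Team learning curve", 2, "Ecosystem maturity", 4,
                     "Ease of maintenance", 4, "Hiring appeal", 5));

             // Weighted total for each language = sum of (weight x score) over all criteria.
             scores.forEach((language, languageScores) -> {
                 double total = weights.entrySet().stream()
                         .mapToDouble(e -> e.getValue() * languageScores.get(e.getKey()))
                         .sum();
                 System.out.printf("%-8s %.2f%n", language, total);
             });
         }
     }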

Adopting Functional Programming Languages (Part 5)

(Note: This is the fifth part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  You’ll find the earlier parts here, here, here and here.)

Benefits

The main benefits of using functional languages stem from their natural expressiveness and the downstream advantages derived from immutability. Languages that are naturally expressive and favour immutability help fight the accidental complexity that inevitably creeps into large codebases.   These benefits also increase as codebases mature and grow.

Expressivity

Naturally expressive languages allow developers to write less code. Less code is beneficial[1] because:

  • There is less code to understand
  • There is less code to maintain
  • There is less code to test
  • There are fewer places for defects to occur

RedMonk used quantitative analysis of updates to open source code repositories on GitHub to rank the expressiveness of languages (http://bit.ly/1gENQ8y). The theory behind this study is that more expressive languages will result in smaller, more isolated changes for each update, leading to fewer lines of code being changed (i.e., smaller deltas on average) per update.

See the charts below for our summary of the RedMonk results, comparing the various families of language against the number of times they appear in the most expressive 50% of the results or the least expressive 50%.

Screenshot 2016-04-21 19.56.47

Screenshot 2016-04-21 19.56.59

Functional languages cluster at the most expressive end of the axis: Clojure (#7), Haskell (#10), Racket (#11), Dylan (#12), Emacs Lisp (#14), R (#17), Scala (#18), OCaml (#20), F# (#21), Erlang (#22) and Common Lisp (#23) take almost half of the top 23 spots on the graph, whereas Java (#44) and C# (#45) sit towards the end of the 52-language list. Whilst the conclusions drawn are subject to many uncontrolled variables, the results do correlate with the prevailing impression that functional languages naturally produce more expressive code than their object-oriented equivalents.

Immutability

For many developers, immutability becomes an end unto itself, and the core reasons for the benefit are often forgotten. We mentioned previously that programs with immutable data are inherently easier to reason about because time is no longer a variable that can impact the state of the system. And immutable data structures have another huge advantage that is becoming more and more important with enterprise software:

Immutable objects are naturally thread safe

Immutable objects/data structures provide a consistent view to the outside world at all times after construction. In this context, multiple threads will see the same version of these objects at all times and avoid classic multi-threaded problems like race conditions and deadlocks that are notoriously difficult to reproduce and debug.

Thread safety is becoming increasingly important as CPUs are built around more, comparatively low powered cores. Software must be capable of running in parallel across multiple cores to take advantage of these architectures.

Counter-intuitively, immutable data structures are usually quite performant, although this was not always the case. When new data structures are created, they share as much structure as they can with the source data structure, rather than automatically taking a separate copy of the source data.
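As a rough illustration of structural sharing, here is a hypothetical persistent (immutable) singly-linked list in Java: “adding” an element creates a new head node that points at the existing list, so the tail is shared between the old and new versions rather than copied.

     // A minimal persistent list: prepend() returns a new list that reuses
     // (shares) every node of the original instead of copying them.
     public final class PersistentList<T> {

         private final T head;                 // value at the front (unused in the empty list)
         private final PersistentList<T> tail; // the rest of the list, shared between versions

         private PersistentList(T head, PersistentList<T> tail) {
             this.head = head;
             this.tail = tail;
         }

         public static <T> PersistentList<T> empty() {
             return new PersistentList<>(null, null);
         }

         // Returns a new list with the extra element; this list is left untouched.
         public PersistentList<T> prepend(T value) {
             return new PersistentList<>(value, this);
         }

         public boolean isEmpty() {
             return tail == null;
         }

         @Override
         public String toString() {
             StringBuilder sb = new StringBuilder("[");
             for (PersistentList<T> node = this; !node.isEmpty(); node = node.tail) {
                 sb.append(node.head).append(node.tail.isEmpty() ? "" : ", ");
             }
             return sb.append("]").toString();
         }

         public static void main(String[] args) {
             PersistentList<String> original = PersistentList.<String>empty().prepend("b").prepend("a");
             PersistentList<String> extended = original.prepend("x"); // shares the "a" and "b" nodes
             System.out.println(original); // [a, b] -- unchanged
             System.out.println(extended); // [x, a, b]
         }
     }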

Object creation and garbage collection are not “free” operations so if memory consumption and runtime performance are absolutely critical non-functional requirements, you might want to look at languages that give you more control over these things (e.g., C or C++).   However, the need for this optimisation should be proven rather than anticipated as the benefits of low-level control over memory management are usually outweighed by the ongoing cost of living with code written in such languages. This sentiment is embodied by the well-worn approach of “make it work, make it right and, finally, make it fast”.

Using immutability with OO languages

The value of immutability can also be seen in the amount of effort expended in applying it with languages that don’t typically encourage it. For many years, software engineers have realized the benefits immutable data structures bring in terms of reducing complexity in applications. Therefore, many people have codified practices to design immutable solutions in non-functional languages.

Indeed, the fundamental notion of encapsulation (one of the main goals of good object-oriented design) recognizes the need for a class to provide strict control over who has access to its state and how that state is changed. Although some of the theory of encapsulation is focused on who can read the state of a class, much of the benefit comes from narrowing who can update that state. In immutable systems, good encapsulation is still desirable, but less so as updates to this state are not allowed.

Many developers now favour building immutable classes within their OO language of choice. This approach results in classes that contain only property accessors (i.e., getters) and not mutators (i.e., setters). With this pattern, classes are initialized only during construction and provide no public access to change their state beyond construction. This practice does not prevent internal mutation from occurring, but developers can apply discipline to return new instances of objects when they have been mutated internally. For example, in an Account class, you can write a method to cancel the existing account:

     public void cancel(CancellationReason reason);

or you could write a version which cancels and returns a newly-created instance of Account:

     public Account cancel(CancellationReason reason);

Unfortunately, when the language does not enforce immutability, the discipline falls to the developers to determine when and how the Account has been changed. It would be easy for a developer to ignore the return value and continue to work with the existing Account object. Conversely, in a functional world, developers naturally deal with immutable structures because that approach is in line with the principles of the language: working against these principles generates friction between the language and the developers.
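As a sketch of the pattern, a hypothetical immutable Account might look like the following: all fields are final, there are no setters, and cancel returns a new instance rather than modifying the receiver (the CancellationReason type here is a stand-in for whatever the real domain uses).

     // Hypothetical immutable Account: state is fixed at construction and every
     // "change" is expressed by returning a new instance.
     public final class Account {

         public enum Status { OPEN, CANCELLED }

         // A stand-in for whatever CancellationReason looks like in the real domain.
         public enum CancellationReason { CUSTOMER_REQUEST, FRAUD, DORMANT }

         private final String accountNumber;
         private final Status status;
         private final CancellationReason cancellationReason;

         public Account(String accountNumber) {
             this(accountNumber, Status.OPEN, null);
         }

         private Account(String accountNumber, Status status, CancellationReason cancellationReason) {
             this.accountNumber = accountNumber;
             this.status = status;
             this.cancellationReason = cancellationReason;
         }

         // Returns a cancelled copy of this account; the original is left untouched.
         public Account cancel(CancellationReason reason) {
             return new Account(accountNumber, Status.CANCELLED, reason);
         }

         // Accessors only -- no mutators.
         public String accountNumber()                  { return accountNumber; }
         public Status status()                         { return status; }
         public CancellationReason cancellationReason() { return cancellationReason; }

         public static void main(String[] args) {
             Account open = new Account("12345");
             Account cancelled = open.cancel(CancellationReason.CUSTOMER_REQUEST);
             System.out.println(open.status());      // OPEN -- the original is unchanged
             System.out.println(cancelled.status()); // CANCELLED
         }
     }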

[1] Be careful of fixating on a single metric because of the unwanted behaviours this fixation could produce.

Adopting Functional Programming Languages (Part 4)

(Note: This is the fourth part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  You’ll find the earlier parts here, here and here.)

Resurgence

The renewed interest in functional programming has been driven by three main factors:

  • A number of successful high profile organisations making significant investments in these languages
  • The efforts by language designers to lower the barrier to migration from existing languages and platforms
  • The incorporation of functional aspects into non-functional languages

High profile success stories

Being able to cite other organisations that have trodden a similar path always helps language adoption efforts. And when those other organisations are high profile, technically innovative and speak publicly about their technology choices, they become an even stronger role model for other organisations considering the same adoption.

Scala @ LinkedIn

LinkedIn is one of the high-profile companies that have gone public with their use of Scala. That usage produced Norbert (http://linkd.in/1dbZsB8), a framework for distributing simple client/server architectures over a clustered environment to produce highly scalable and fault-tolerant solutions for high-load environments.

Seamless Java integration, easy concurrency through the Actor model and support for code reuse are listed as being the main reasons for the adoption (http://bit.ly/1gENnmY).

Scala @ Twitter

Twitter uses Scala for many of its infrastructure services (with Ruby/Ruby on Rails for the front end), including queuing, the social graph store, people search and the streaming API (http://bit.ly/1cXIXrW). Beyond the Java integration and code-reuse benefits, Twitter highlighted ready access to the language developers and Scala being “fast, fun and good for long-running processes” as major reasons for adopting it. Furthermore, Twitter has re-invested in the Scala community by releasing numerous libraries it has created as open source software (https://github.com/twitter).

Erlang @ Facebook

Facebook uses Erlang for its Facebook Chat feature, and the nature of the service plays to the strengths of Erlang’s concurrency model. With upwards of 800 million messages per day across 7 million chat channels to support, Erlang has served Facebook well in providing the basis for a scalable, reliable set of services, upon which the JavaScript and PHP front end is built.

Haskell @ Finance

Haskell is a popular choice amongst corporate banking and trading houses, with usage within ABN AMRO, Bank of America, Barclays, Credit Suisse, Deutsche Bank and Standard Chartered Bank, as well as companies outside the finance industry like Google and Intel (http://bit.ly/1fGjqOJ). The problem domains common to these industries lend themselves to solutions expressed in terms of mathematical functions, making them ideally suited to functional languages.

Clojure @ Netflix

After over a year of investigation, Netflix engineers started investing in Clojure (https://speakerdeck.com/daveray/clojure-at-netflix) for both internal and production services. Netflix’s choice was driven primarily by the abstractions Clojure provides as well as the interactive development model available to Clojure developers via the REPL (Read Evaluate Print Loop) utility. The Netflix story shows a progressive increase in investment in terms of where they introduced Clojure: as their confidence grew, so too did the importance and size of the code that was written in Clojure.

Easy migration paths

LinkedIn and Twitter both declared that Scala’s tight Java integration is an important reason for its selection. With Scala, developers can easily drop down into the Java language to call upon either core Java libraries or third-party code that already provides valuable functionality. Beyond ease of integration at the language level, running on top of the Java Virtual Machine (JVM) gives Scala and Clojure developers the benefit of the thousands of person-years invested in developing and tuning the JVM. The same benefits are available to programmers using F# on the Microsoft Common Language Runtime (CLR).

Another common attribute of the organisations listed above is their ability to adopt these languages selectively in small areas due to the nature of their architecture. By composing applications of many independently buildable and deployable services, these organisations can choose a range of different implementation languages and need only standardize on the communication mechanism between the services (e.g., Apache Thrift for Facebook and Twitter).

Functional aspects in non-functional languages

Also lowering the barrier to entry to functional programming is the prominence of functional concepts in traditionally non-functional languages. Being able to program with higher order functions when working within another dominant paradigm can help soften the blow for developers who are worried about leaping completely into a functional language.

The introduction of lambda expressions in Java 8 (http://bit.ly/1kGXMzg) is the most recent sign of functional programming influencing the evolution of imperative/OO languages, and C# developers have had access to lambda expressions and higher-order functions since 2007. The lingua franca of the web, JavaScript, has supported higher-order functions since its creation. Many dynamic languages like Ruby allow blocks to be passed to, and returned from, other functions – emulating the behaviour of higher-order functions found in many functional languages. With blocks, standard list operations (functions that take lists as parameters and return variations on those lists) like map and reduce can be easily implemented.
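As a rough illustration of how these features read in practice, here is a small Java 8 sketch using lambda expressions and the Streams API to express map/filter/reduce-style operations without any explicit loop plumbing (the data is made up):

     import java.util.Arrays;
     import java.util.List;
     import java.util.stream.Collectors;

     public class LambdaExample {
         public static void main(String[] args) {
             List<String> names = Arrays.asList("ada", "grace", "alan", "barbara");

             // map: transform each element with no explicit loop or index bookkeeping.
             List<String> shouted = names.stream()
                     .map(String::toUpperCase)
                     .collect(Collectors.toList());

             // filter + reduce: keep names starting with "a", then sum their lengths.
             int lettersInANames = names.stream()
                     .filter(name -> name.startsWith("a"))
                     .mapToInt(String::length)
                     .sum();

             System.out.println(shouted);         // [ADA, GRACE, ALAN, BARBARA]
             System.out.println(lettersInANames); // 7 ("ada" + "alan")
         }
     }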

It’s also important to note that these features have been introduced for practical reasons. The appropriate use of these features will provide considerable benefits around readability and maintainability by reducing the amount of boilerplate code that is needed for many algorithms.

Adopting Functional Programming Languages (Part 3)

(Note: This is the third part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  You’ll find the earlier parts here and here.)

History

The dawn of functional languages took place when the first implementation of LISP was released in 1959. Since that time, a steady stream of functional languages has been developed. Like many language families, some progeny live long and prosperous lives while others lack the support and mindshare needed to reach critical mass in the market.   The modern languages which are the focus of this paper are still a long way from reaching their golden jubilee, but they are all built upon foundational languages like LISP and ML (originating in the early 1970s).

fp-languages-timeline

A potted timeline of Functional Programming languages

Adopting Functional Programming Languages (Part 2)

(Note: This is the second part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  You’ll find the first part here.)

Definition

Although the functional programming community struggles to agree on an exact definition of what makes a language inherently functional, an agreed common ground exists:

  • Computation via functional composition
  • Higher-order functions
  • Favouring immutability

Functional languages are also declarative, which means they focus on what operations should be executed, rather than how they should be executed. This distinction immediately puts them at odds with the dominant object-oriented/imperative approach to software development and more inline with other declarative languages like SQL, HTML, CSS and the formulae of Excel.

Functional composition

Not surprisingly, the role of functions takes centre stage in the definition of functional languages. While most languages come with similar ways of creating named blocks of code (e.g., methods, procedures, subroutines, etc.), functional languages define this term very specifically. In this world, a function:

  • Creates no side effects
  • References no global state
  • Is referentially transparent

Here is a simple Java function that violates the strict definition of pure functions by outputting content to the console, which is a form of side effect.

fp-whitepaper-code-1

A Java function with a side-effect.
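A sketch of such a function (using a hypothetical currency-conversion example) might be:

     public class CurrencyConverter {

         // Impure: printing to the console changes the outside world in a way
         // that is not visible through the returned value -- a side effect.
         public static double convertToEuros(double amountInDollars, double exchangeRate) {
             double converted = amountInDollars * exchangeRate;
             System.out.println("Converted " + amountInDollars + " USD into " + converted + " EUR");
             return converted;
         }
     }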

A side effect is any change to the execution environment that is not observable through the value returned by the function. Other common examples of side effects include updating databases or other data stores within the function. Code producing side effects is much harder to reason about because the scope of change produced by the function is so wide.

Here is another version of the previous function that removes the side effect but accesses global state in the form of a collaborator that provides the current exchange rate.

fp-whitepaper-code-2

A Java function using non-local state.
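Continuing the hypothetical currency-conversion sketch, the side effect is gone but the result now depends on a collaborator whose answer can change between calls:

     public class CurrencyConverter {

         // A collaborator that looks up the current exchange rate (e.g., from a
         // remote service or shared cache); its answer can change between calls.
         public interface ExchangeRateService {
             double currentUsdToEurRate();
         }

         private final ExchangeRateService exchangeRateService;

         public CurrencyConverter(ExchangeRateService exchangeRateService) {
             this.exchangeRateService = exchangeRateService;
         }

         // No side effects, but the result depends on state outside the
         // function's arguments: two calls with the same amount may differ.
         public double convertToEuros(double amountInDollars) {
             return amountInDollars * exchangeRateService.currentUsdToEurRate();
         }
     }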

As with side effects, to reason about how this function works, we also need to understand how the collaborator behaves, increasing the complexity of the problem significantly.

The final version of this function is one that includes no side effects and references no global state.

fp-whitepaper-code-3

A pure Java function.
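Completing the hypothetical sketch, the pure version takes everything it needs as arguments:

     public class CurrencyConverter {

         // Pure: the result depends only on the arguments, so the same inputs
         // always produce the same output and nothing else is affected.
         public static double convertToEuros(double amountInDollars, double exchangeRate) {
             return amountInDollars * exchangeRate;
         }
     }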

This function is therefore referentially transparent, which means it will return exactly the same value every single time it is evaluated with the same arguments. Referential transparency offers enormous benefits to both the programmer and the compiler in terms of caching, parallelisation and other performance optimisations.

The lack of side effects and global state is what classifies a function as a pure function, and languages that provide only pure functions are called pure functional languages (e.g., Haskell). In practice, many functional languages are impure and developers make disciplined use of side effects for things like logging and database access. Pure functional languages like Haskell use concepts like the IO monad (see http://www.haskell.org/tutorial/io.html) to perform these types of operations while maintaining referential transparency and avoiding the downsides of uncontrolled side effects.

Higher-order functions

Using functions as the centrepiece of computation is not the only key differentiator for functional languages. Functions are first class in these languages and, as such, can be passed as arguments to other functions, as well as returned as values from functions. The combination of these properties defines higher-order functions.

The judicious use of higher-order functions is what provides much of the expressiveness of functional languages and removes the need for much of the boilerplate code found in non-functional languages. Classic examples of this are performing some arbitrary processing (e.g., sorting, filtering, collecting, etc.) on a list of values. Solutions in imperative languages require explicit looping constructs (e.g., a “for” or “while” loop) and references to the start and end of the list, with the processing logic buried inside the loop. Conversely, functional languages provide a number of higher-order functions that contain the logic necessary to iterate through the list. Calling these functions requires passing an additional function that performs the necessary business logic: all loop initialization and iteration control is abstracted away within the higher-order function.

Below is a trivial example of how Java mimics some of the benefits of higher-order functions by using interfaces (in this case java.util.Comparator) to provide function objects. In this example, the Comparator parameter to ListSorter’s sortUsingComparator method exists only for its implementation of the compare method.

fp-whitepaper-code-4

Using Java interfaces to mimic a higher order function.
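A sketch along those lines (the ListSorter and sortUsingComparator names come from the description above; the rest is illustrative) might be:

     import java.util.ArrayList;
     import java.util.Arrays;
     import java.util.Collections;
     import java.util.Comparator;
     import java.util.List;

     public class ListSorter {

         // The Comparator argument is a function object: it exists purely to supply
         // the compare() behaviour that the sort calls back into for each pair.
         public List<String> sortUsingComparator(List<String> input, Comparator<String> comparator) {
             List<String> copy = new ArrayList<>(input);
             Collections.sort(copy, comparator);
             return copy;
         }

         public static void main(String[] args) {
             ListSorter sorter = new ListSorter();
             List<String> names = Arrays.asList("Clojure", "Haskell", "Erlang", "Scala");

             // Pre-Java 8 style: an anonymous inner class standing in for a function.
             List<String> byLength = sorter.sortUsingComparator(names, new Comparator<String>() {
                 @Override
                 public int compare(String a, String b) {
                     return Integer.compare(a.length(), b.length());
                 }
             });

             System.out.println(byLength); // [Scala, Erlang, Clojure, Haskell]
         }
     }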

Immutability

Another key difference between functional and imperative/object-oriented languages comes from their respective approaches to state management. Imperative/object-oriented systems are built around the notion of ongoing changes to the state of the software at runtime. Conversely, functional systems emphasize immutability of state and seek to minimize the scope of state that is mutable.

At runtime, a Java system is a network of objects with state, represented by the value of the object’s attributes. Over time, different pieces of code will read and write these attributes, mutating the state of the object and its enclosing system. In order to understand what state a particular object is in, you need to understand what values its attributes contain.

By contrast, functional systems maintain no mutable state. At runtime there is no equivalent of a Java object’s attributes that are changed over time. “Variables” can be initialized with values, but once this initialization is done, the values are persisted unless the variables are re-created. Re-assignment is not permitted so the value of a variable doesn’t vary across its lifetime.

Immutability helps greatly when you need to reason about a piece of code within a system, and one of the earliest times this reasoning needs to occur is when you are writing test cases. If you need to test a Java method that takes two arguments and returns a single value (see below), the number of logical test cases is tied to the set of values the two arguments can contain. If that same method also uses the value of one of its object’s attributes, then all the values of that attribute now need to be considered. And the more this method relies on its state, the harder it is to reason about its behaviour.

fp-whitepaper-code-5

Conversely, a function in a language such as Haskell will only act upon its arguments and return a result. Building the test cases for this function is a much smaller, quicker and simpler task.
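To make the testing point concrete, here is a hypothetical sketch of such a state-dependent Java method: the result depends not only on the two arguments but also on the object’s discountRate attribute, so tests have to cover that hidden third input as well.

     public class PriceCalculator {

         // Hidden input: any test of totalPrice() must also consider the values
         // this attribute can hold, not just the two method arguments.
         private double discountRate;

         public void setDiscountRate(double discountRate) {
             this.discountRate = discountRate;
         }

         // Takes two arguments and returns a value, but its behaviour also
         // depends on the current state of the enclosing object.
         public double totalPrice(double unitPrice, int quantity) {
             return unitPrice * quantity * (1.0 - discountRate);
         }
     }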

Immutability is not a property uniquely associated with functional languages. Programmers can use discipline to create immutable data structures in any language (more on this in a later post) and several non-functional languages include immutable data structures by default or via third-party libraries.

Adopting Functional Programming Languages (Part 1)

(Note: This is the first part of a whitepaper I wrote a couple of years ago which I never had a chance to publish in full.  I plan to serialise the major sections into separate blog articles here.)

Introduction

Over recent years, there has been a quiet renaissance in a style of programming once isolated to niche fields of academia and computer science: functional programming.  Functional languages like Scala, Haskell and Clojure are attracting significant attention from developers, and aspects of functional programming are now established in many programming languages/platforms, including the industry heavyweights Java and C#.

But what is the catalyst for this renaissance?  The increasing scale and sophistication required of custom software development has led people to reprioritise the benefits of functional languages (e.g., immutability, expressiveness) as a way of increasing code quality, boosting development productivity and reducing complexity and risk.

Alongside this rising popularity, organisational barriers to entry to functional programming have been lowered by the availability of these languages on existing enterprise development platforms (e.g., Clojure and Scala running on the Java Virtual Machine, and F# on the Microsoft CLR) and, in some cases, by their augmentation of existing languages, as Scala has done with its hybrid functional/object-oriented approach.

A third factor contributing to this increased interest is the cyclical nature of IT in general, with subsequent generations of practitioners rediscovering old approaches. Agile software development principles hark back to the days before heavyweight command-and-control processes took over the IT landscape. The NoSQL database movement is “new” only in the sense that many people have forgotten that non-relational databases were once a viable choice for many data storage requirements.

See “Resurgence” for more information on the driving forces behind the popularity of functional programming.

Why functional programming should be your next strategic direction

So why should businesses care about this revolution? Isn’t the choice of programming language an arcane detail in a world where most software can be accessed as a commodity service in the cloud? In some cases this is true. Every business has certain capabilities that are essentially the same from one organisation to the next. Examples might be maintaining the general ledger or managing payroll. The software to support these capabilities can be acquired off the shelf and adopted with little or no customisation. The programming language those systems are written in should not be a concern to the consumer.

On the other hand, every business depends on capabilities or processes that differentiate it from its competitors. Since these processes are unique, there is, by definition, no off-the-shelf software to implement them. In a business landscape that is increasingly technology-dependent, the ability to continually develop and maintain custom software ahead of competitors is a universal requirement.

This is where functional programming languages come in. Choosing the right language for the job can make a business more responsive to customers, allow it to adapt to market pressures faster than competitors, and attract a different class of technical professionals.

The benefits of building expertise in functional programming are twofold:

  1. Deliver more robust and scalable software solutions
  2. Position your organisation as an employer of choice in the marketplace for high value developers

Hacking ThoughtWorks recruitment revisited (Graduate Code Reviews)

For various reasons, I’ve been doing more than my usual number of code reviews recently, including three graduate code reviews on the same problem in Java over the weekend.  One of the reviews I thought was relatively poor, one was average and one was quite good.

And it occurred to me that it wouldn’t have taken much for the weaker submissions to be much, much closer to the strongest one.  Many of the ways in which their code fell down are quite simple to address, hence this blog post.

Note: As with the other times I’ve written on our recruitment process, I don’t think I’m giving the game away with anything I write.  In fact, the people who read this post are probably already self-selecting to proceed through our recruitment process quite well😦

Anyway, here are the quick ways for giving your graduate code review solution the best chance to shine.

1. Make your README worthy of reading

From the perspective of a reviewer, I want to minimise the amount of time between when I first get my hands on your submission and when I can run your code.  This means the following tasks need to be as frictionless as possible:

  • Installation
  • Build/package
  • Run

The things likely to become roadblocks in these activities are:

Not having an easy way to build your application: although I’ve long moaned about Maven’s way of working, I’ve come to really appreciate its ability to let me build/package code solutions in Java without really needing to think about it.  Likewise for those projects that use Ant, Gradle, make or any one of a number of other language-specific or language-agnostic build management tools.

Not knowing which version of the language to use: Java 7?  Java 8?  Ruby 1.9.x?  Ruby 2.x?  I can find the right version by trial-and-error compile/run cycles until I have no errors… or you can put the versions into your README and put me out of my misery.

Looking for good examples of README files?  Try any one of a number of the most popular libraries on GitHub to see what their authors have chosen to include.

2. Do out-of-the-box testing

There are sets of test data supplied with each of our coding problems.  By all means, make sure your solution works according to this data, but don’t consider this data to be exhaustive!  Try another couple of variants of the test data to see how your application behaves in these cases.  This sort of testing might well expose limitations in your assumptions that result in unexpected behaviour in your code.  Whether you choose to address these limitations in the code, or simply state them in your README is then your choice.

3. If using an IDE, listen to the IDE

If you’re writing in a compiled language like C# or Java and using an IDE (something like IntelliJ, Eclipse or Visual Studio instead of TextMate, Sublime Text, Vim or Emacs), you can make a vast improvement to the first impression people have of your code just by turning on the IDE warnings around code quality and addressing them where needed:

  • If your IDE highlights an unused variable or method, delete it.
  • If your IDE highlights a statement that can be simplified, simplify it.
  • If your IDE highlights an unnecessary initialisation, remove it.
  • You get the idea!

None of these warnings are critical, but all help remove code that will just become a distraction to people reviewing your code.

Good luck with your submissions!

Why Enterprise IT is failing our universities

I spent quite a lot of time with people responsible for tertiary IT education in 2014 and I feel for them in terms of the pressure they are under from the IT industry, the primary consumer of the principal “product” of universities – graduate students.

Be Prepared!

It’s become part of the collective consciousness that the role of tertiary institutions is to “prepare students for their careers”.  But the definition of “prepared” in this context is driving misguided behaviour from universities in an effort to reach a goal which is frankly unachievable given the constraints under which they operate.

The working definition of “prepared” for graduates leaving university and entering the IT industry seems to be:

Prepared: Capable of contributing to their employers’ business from Day 1.  Requiring little/no ongoing training or mentorship.  Knowledgeable in all current technologies and techniques used by the employer.

In pursuit of this goal, universities build degrees and populate them with a vast range of subjects across the fundamental principles of development, operating systems, networking, databases and softer skills around communication.  But very little of this knowledge will “prepare” graduates as defined above.  So what do universities do?  Into the precious little free space left in most degrees, they add subjects around “modern” computing topics like web development, cloud computing, big data, mobile applications and the like.

While presumably appealing to uninformed students and university marketing teams alike, none of these subjects will cover their respective topics in sufficient depth to appease employers when these graduates arrive in the workforce.  And while these topics are currently trending within the IT industry in general, not every employer has a need for mobile development or big data, for example.

Contrarianism

So I propose a new working definition for “prepared”, one which could change the expectations industry have on IT graduates and allow universities to spend more time on what I see to be the really core skills a graduate needs to have.

Prepared: Enters industry with a passion for application of technology.  Accepts and embraces the lifetime of learning ahead.  Understands the benefit of strong mentors and is capable of growing these relationships.  Accepts that there is no “one right way” in most cases.

Under this definition, universities can focus their efforts on sparking and nurturing that passion.  From there, impress on students how dynamic this field is and how much it is likely to change by the time they graduate.  And from there, place emphasis on how many solutions there are to so many problems, some of which may be completely non-technical.

Preparation: Redux

So how would this approach be reflected in a typical IT degree?

There are a variety of ways you could skin this cat, but some of the ways I would consider are:

  • Make attendance at local community groups in relevant areas compulsory for several subjects.  Arrange with the community organisers to put aside one community event for student presentations.
  • Wherever there are leading technical candidates with different emphases (e.g., OO versus functional approaches to development, or RDBMS versus NoSQL databases), teach both simultaneously to help students appreciate the similarities and differences.
  • Continue the good work many universities currently do regarding industry-based learning, internships, student projects, guest lecturers, etc, etc.
  • Stop trying to build subjects around trending topics because (a) they’ll age quicker than soft cheese on a summer’s day and (b) they’ll inevitably leave students with a poor idea of how that topic is used in industry.
  • And finally, resist further attempts by industry to neglect their duty in continuing the education of their graduates and other employees.

DevOps Melbourne 2014: A Year in Review

At the end of 2014, I’ve stepped down from organising and hosting DevOps Melbourne.  I had the honour of holding this role for 18 months after it was gifted to me by the previous host, Evan Bottcher, and I look forward to seeing the community continue to thrive under the stewardship of the new host, Andrew Jones.

So how was 2014 for DevOps Melbourne?

Frequency/Timing

As is our relaxed tradition, we continued with bi-monthly meetups in 2014.  As an organiser, I find this frequency mitigates the major stress of having to run around chasing down presenters and topics.  Having two months between meetups makes this a far more relaxing activity.  Furthermore, I’ve never had the community ask for increased frequency, so I’m happy I’m not being completely selfish on this front.

We have also settled on a “last Tuesday of every 2nd month” schedule, which seemed to fit nicely with our community and also had the minimum of clashes with other meetups that might share our members.  Finding a good quiet spot in a meetup calendar is key to getting a regular and healthy number of people along to these events.

Within each meetup, I try to arrange for a single “headline” session (30-40 minutes plus Q&A) to finish the night, with one or two “support” acts (lightning talks or 10-15 minute talks).  Given people are generally attending after a full day’s work and are probably already getting tired, erring on shorter rather than longer presentations is a good idea.

Ideally, we ran from 6:30pm to around 8:30pm with a break in the middle for food/drinks/chatting.

Content

Here are the titles from all the sessions in 2014:

  • “Server provisioning with Ansible”
  • “Microservice architecture and DevOps – does one imply the other?”
  • “Tomorrow’s legacy, today.”
  • “Case study: Application of Digital Signal Processing techniques to simplify system data.”
  • “Which conferences should I go to this year?”
  • “Datensparsamkeit”
  • “Experimentation at RedBubble”
  • “Navigating the minefield.  Implementing DevOps in a large organisation”
  • Docker + Microservices
  • “Commercialising Free & Open Source Software”
  • DevOpsDays Brisbane: Review
  • “The Ops Dojo”
  • “24 Months (at Infoxchange)”
  • “Getting Devs to own their Ops at IOOF”
  • “Docker – Containerize all the things!”

Given the People/Process/Tools holy trinity of successful DevOps, I think we struck a good balance between each of these foci.  We are helped tremendously by the presence of the local Infracoders Meetup, which not only gives a specific outlet for those who cannot get enough of Docker the tooling, but also runs each month with good attendance and many presenters.  I dip my hat to Matt Jones and David Lutz, tireless organisers of Infracoders and regular DevOps Melbourne attendees as well.

But organising is not without the occasional conundrum: an issue which crops up regularly is how to handle requests for outside commercial involvement.  This involvement normally consists of either:

  1. Offers from companies presenting on their own products/services
  2. Offers to host the meetups at company offices and/or sponsor food or drinks

The first of these issues is sometimes tricky to deal with.  Regular community surveys show that vendor product presentations are not well received, so I feel a responsibility to keep these types of presentations to a minimum.  I try to impress on these presenters the need to focus on the tools at a level that will interest the audience and to stay away from blatant self-promotion.  I also try to limit these types of sessions to shorter durations, especially if I don’t personally know the presenter.

Requests to host DevOps Melbourne at different venues are met with a gentle but firm “thanks, but no thanks”.  We have no good reason to move from our current location, and the extra organisational overhead of doing so is not worth the benefit.

The one time we had another group sponsor drinks this year was a last minute offer by the good folk at Chef, who were in Australia for DevOps Days in Brisbane.

(Full Disclosure: My employer, ThoughtWorks, covers the costs of food at DevOps Melbourne meetups and provides products and consulting services in the DevOps space, so I’m always aware of the potential conflict of interest this puts me at as both a ThoughtWorker and a community leader.)

Venue

Since inception, DevOps Melbourne has been completely monogamous in its choice of venue.  The Apartment has been our home for over three years now and provides a good-sized space, fabulous decor, comfortable seating and a large projector screen for our presentations.

Hosting these types of events at corporate offices (the choice for many community groups) usually results in free drinks and food for the attendees, but I’ve yet to see an office with the ambiance to equal a lounge bar like the Apartment.  As an extra bonus, there’s no need to worry about security for people entering the venue (a non-trivial problem with accessing some offices after hours), or cleaning up 🙂

So thank you Sunny, Danielle and the other good folk at the Apartment who keep our community ticking along!

Attendance

Firstly, the size of your community on meetup.com doesn’t mean squat.  I got a bit of a cheap thrill when our official membership ticked over 1000 people late in 2014, but typical physical presence at meetups hovered around 30-50 throughout the year.  This is a good number given the size of The Apartment and my desire (on behalf of my employer) to sponsor nibbles during the night.

I usually start each meetup with a straw poll of where the current audience sits on the DevOps spectrum.  Normally there would be roughly equal numbers of people who primarily identify themselves as “dev” or “ops”, with a handful of “other” for good measure.  This is a good thing; an audience dramatically skewed in either direction means the content is not being curated correctly.

Interestingly, there would also be a large percentage of people who were attending the meetup for the first time.  This suggests to me we have a highly dynamic audience, many of whom might come along to kick the tyres, or are perhaps motivated by a single topic that is being discussed on that evening.  Given the diversity of the topics presented this year, this is perhaps not surprising.

Sadly, no women presented at DevOps Melbourne in 2014.  Not surprisingly (but equally sadly), the number of women in the audience was low: perhaps only a couple each meetup.  For all the talk about the lack of women working in software development roles, this imbalance is far more harshly evident in the operations/DevOps spaces.  I have some thoughts on how to help address this issue in the DevOps world, but these plans lie elsewhere and not in the meetup community, at least not initially.

Looking forward

With the DevOps Days conference coming to Melbourne in 2015, I look forward to seeing how DevOps Melbourne can get involved and what impact the conference will have on the community.

I suspect the 2014 trending themes of containers (hopefully, more than just the D word) and microservices will continue to be strongly represented within the community.

I hope we get a chance to talk security in 2015.  It’s a topic that seems more and more pressing to have a broad understanding of with each passing high profile breach and I would like to see more representation from the security community within the DevOps world.

I’ll be attending DevOps Melbourne regularly during 2015, and am looking forward to being able to focus on the presentations rather than the organisation.  Perhaps I’ll even try to put the presenter hat on at some point during the year.