Creating a Serial PIC Programmer from an Arduino (Part 1)

A view of the workspace for debugging the PIC programmer

A while back I bought a couple of PIC16F57 (DIP) chips because they were dirt cheap. I figured someday I could use them in something. Yes, I know, this is a horrible way to actually build something and a great way to accumulate junk. However, this time the bet paid off-- only about a year or two late, but that's beside the point. The problem I now had was that I didn't have a PIC programmer.

When I bought these chips I figured I could easily rig a board up to the chip via USB. Lo and behold, I didn't read the docs properly; this chipset doesn't have a USB to serial interface. Instead, it only supports Microchip's In-Circuit Serial Programming (ICSP) protocol via direct serial communication. Rather than spend the $40 to buy a PIC programmer (thus, accumulating even more junk I don't need), I decided to think about how I could make this happen.

Glancing at some of my extra devices lying around, I noticed an unused Arduino. This is how the idea for this project came to life. Believe me, the irony of programming a PIC chip with an ATMega is not lost on me. So for all of you asking, "why would anyone do this?" the answer is two-fold. First, I didn't want to accumulate even more electronics I would not use often. Second, these exercises are just fun from time to time!

Hardware Design

My prototype's hardware design targets an Arduino Uno (rev 3) and a PIC16F57. Assuming the protocol looks the same for other ICSP devices, a more reusable platform could emerge from a common connector interface. Likewise, for other one-offs it could easily be adapted to different pinouts. Today, however, I just have the direct design for interfacing these two devices:

PIC Programmer V1 Schematic

Overall, the design can't get much simpler. For power I have two voltage sources. The Arduino is USB-powered and the 5V output powers the PIC chip. Similarly, I have a separate +12V source for entering/exiting PIC programming mode. For communication, I have tied the serial communication pins from the Arduino directly to the PIC device.

The most complicated portion of this design is the transistor configuration; though even this is straightforward. I use the transistor to switch the 12V supply to the PIC chip. If I drive the Arduino pin 13 high, the 12V source shunts to ground. Otherwise, 12V is supplied to the MCLR pin on the PIC chip. I make no claims that this is the most efficient design (either via layout or power consumption), but it's my first working prototype.

Serial Communication with an Arduino

Arduino has made serial communication pretty trivial. The only problem is that the Arduino's serial communication ports are UART; that is to say, the serial communication is asynchronous. The specification for programming a PIC chip with ICSP clearly states the need for a highly controlled clock for synchronous serial communication. This means that the Arduino's Serial interface won't work for us. As a result, we will use the Arduino to generate our own serial clock and signal the data bits accordingly.

Setting the Clock Speed

The first task in managing our own serial communication with the Arduino is to select an appropriate clock speed. The key to choosing this speed was finding a suitable trade-off between programming speed (i.e. a fast baud rate) and computation speed on the Arduino (i.e. cycles of computation between each clock tick).

Remember, the Arduino is ultimately running an infinite loop and isn't actually doing any parallel computation. This means that the time it takes to perform all of your logic for switching data bits must be negligible between clock ticks. If your computation time is longer than or close to the clock period, the computation will impact the clock's ability to tick steadily. As a rule of thumb, you can set your clock period to be roughly 1 to 2 orders of magnitude larger than your total computation time.

Taking these factors into account, I chose 9600 baud (or a clock at 9.6KHz). To perform all the logic required for sending the appropriate programming data bits, I estimated somewhere between hundreds of nanoseconds and a few microseconds of computation. Giving myself some headroom, I selected a standard baud rate whose period is roughly two orders of magnitude larger than my computation estimate; namely, a period of 104 microseconds, which corresponds to a 9.6KHz clock.

After completing the project I realized I could have optimized my clock speed further, but that was unnecessary here: the rate I selected worked well. At 9600 baud, programming completes in a timely manner because we don't have much data to transmit, and the rate leaves plenty of headroom to experiment with different types of computation.

Generating the Clock Signal

While this discussion has primarily focused on the design decisions involved in choosing a clock rate, how do we actually generate the signal? The process really comes down to toggling a GPIO pin on the Arduino; in this implementation, I chose pin 2. While you can refer to the code for more specific details, an outline of this process follows:
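In outline, the loop looks something like the following Arduino sketch (a minimal sketch of the idea, not the exact firmware; pin and constant names are illustrative):

```cpp
// Clock generation outline: pin 2 carries the clock, and 52 us is half
// of the 104 us period that yields a ~9.6 kHz clock.
const int CLOCK_PIN = 2;
const unsigned long HALF_PERIOD_US = 52;

bool clockHigh = false;

void setup() {
  pinMode(CLOCK_PIN, OUTPUT);
}

void loop() {
  // Toggle the clock line once per loop iteration.
  clockHigh = !clockHigh;
  digitalWrite(CLOCK_PIN, clockHigh ? HIGH : LOW);

  if (clockHigh) {
    // ... data control logic runs on the rising edge (omitted) ...
  }

  // Wait half the clock period before the next toggle.
  delayMicroseconds(HALF_PERIOD_US);
}
```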

As you can see, "ticking" the clock basically consists of toggling the pin and then making sure each loop iteration waits for half the clock period. The omitted section for data control is where most of the controller logic goes; however, it runs in far less time than 52 microseconds. As a result, the duration of each loop iteration satisfies:

\[
52\,\mu s \gg \delta \\
52\,\mu s + \delta \simeq 52\,\mu s
\]

where \(\delta\) is the time required to perform computation for data control. Consequently, the clock ticks at an appropriate rate. I have included an image taken from my oscilloscope below.

Oscilloscope measuring clock

This image provides some empirical evidence that our approach should work. While no data is being sent in this image (we'll show more of that below), we can generate a nice clock signal at 9.6KHz simply by toggling the pin and waiting (notice the 1/|dX| and BX-AX readings on the image).

Controlling the Data Line

Now that we have a steady clock, we need to control the data line. Writing this section of code felt like I was back in my VHDL/Verilog days. The basic principle-- from a signal generation perspective-- was to only change the data line on a positive clock edge. There was a minor complication for the read data command (since the pin has to switch from output to input), but this was an isolated case with a straightforward solution. To actually control the signal, we manually drive the serial data pin (in our case, pin 4) high or low each clock cycle, depending on the command and data.

The ICSP programming protocol starts with a 6-bit command sequence. If the command requires data, then a framed 14-bit word (16 bits total with the start and stop bits) is sent or received. Command and data bits are sent least significant bit first. In the case of my PIC16F57, the commands are only 4 bits; the upper 2 bits are ignored by the PIC. Likewise, since the PIC16F57 has a 12-bit word, the upper 2 bits of the data word are also ignored while sending and receiving data.
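As a concrete sketch of the bit ordering (an illustrative helper, not the actual firmware), expanding a 6-bit command into the sequence shifted out on the wire looks like:

```cpp
#include <vector>

// Expand a 6-bit ICSP command into the bits shifted out on the wire,
// least significant bit first (illustrative helper function).
std::vector<int> commandBits(unsigned command) {
    std::vector<int> bits;
    for (int i = 0; i < 6; ++i) {
        // Bit i of the command is the (i+1)-th bit on the wire.
        bits.push_back((command >> i) & 1);
    }
    return bits;
}
```

For the load command 0bXX0010, this yields 0, 1, 0, 0 followed by the two don't-care bits, which is exactly the reversed order we'll see on the oscilloscope.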

The Load Data Command

Let's first investigate the load data command. This command queues up data to write to the chip; a series of additional commands and delays then flushes this data to the chip. The bits for the load command are 0bXX0010 (where X is a "don't care" value). Let's take a look at it under the oscilloscope:

Load command under oscilloscope
NOTE: The clock in this image is halved (i.e. ~4.8KHz) due to a programming error. This has been fixed in the actual code and doesn't affect results other than the timescale for this plot.

The yellow curve is the clock and the blue curve is our data line. Starting from the left (and reading the blue curve under the yellow "high" marks) we can read our command exactly as intended: 0b0100XX. Notice that the bit order is reversed since the least significant bit is sent first. If you follow along a little further, you'll notice a clock-low delay. This delay allows the PIC chip to prepare for data. The data for the command immediately follows the delay.

Implementation Overview

Without going too deeply into the details (again, I refer you to the code), the command sequences are modeled as a state machine. Generally, when executing a command, we keep track of how many steps of that command have already been taken. Since each command consists of sending a finite number of bits, we know precisely what to do at each step.

The other detail I mentioned earlier was the read command. This command is sent over pin 4 in output mode, but during the delay this pin must switch to input mode. Once it does, the PIC chip will send the data at the given memory address. To accommodate this, each command starts by setting the pin to output mode; the read command then sets the pin to input mode when appropriate.


I've enjoyed building out this project. When I started, I really wanted to discover whether or not I could build a PIC programmer with an Arduino. This post reviews my initial prototype and gives a high-level description of the Arduino code. However, the story doesn't end here.

Due to a variety of limitations, I had to introduce a PC-based controller to stream data to the Arduino. My finished product also removes extra elements (i.e. a second 12.5V power supply) and moves from a breadboard to a more permanent fixture. Even so, I leave these details to a part 2 of this post.

In any case, you can check out my code from this repo and run it today. While I work on the second part of the write-up, you can always read through what I've done. For now, though, I will leave you with a picture of some messy breadboarding.

The most amount of work I've ever done to blink an LED!
An LED powered by the PIC chip after programming it with the Arduino Programmer!

Improve Your Reasoning Skills by Studying Mathematics

Mathematics is seemingly one of the most feared subjects in all curricula. At least that appears to be the case here in the United States. While indisputably powerful and useful (look no further than the latest machine learning and artificial intelligence hype), many of us "mere mortals" (i.e. non-mathematicians) feel it's something far beyond our comprehension. Moreover, after we've finished our coursework, many of us stop learning advanced math altogether. This suggests a common sentiment that higher-order mathematics has no further use in our everyday lives. I want to reiterate that this is certainly not true. If nothing else, studying math will improve your reasoning skills, and likely your problem-abstraction skills as well.

Mathematics is Not Computation

Much of an engineer's work is done using logic, physics, differential equations, and various other forms of esoteric computation. However, memorizing common formulas and plugging and chugging through them is insufficient for being a top-tier engineer. Similarly, performing this sort of rote computation side-steps the point of learning math in the first place.

The difference between computation and math is like cooking a hot pocket vs. preparing a four-course meal. When cooking a hot pocket, everything is prepared for you; all you have to understand is how to operate the microwave or oven. A four-course meal requires the same knowledge of those tools, but it also requires you to understand the synergy between flavors and how to transform raw ingredients into special dishes.

In summary, believing that math is all about "crunching numbers" will only hinder your ability to study it.

Mathematics is Not Easy

No matter what anyone tells you, math is not easy. While learning is highly non-linear and learning rates vary greatly from person to person, I assert that-- in general-- hard work and persistence are required to actually advance your knowledge. Many of math's groundbreaking theorems and insights come from years of devoted work by some of the world's most intelligent and persistent human beings. Truly absorbing all of that insight in a short time span is simply unreasonable.

The point is, do not underestimate yourself and do not overestimate the intelligence of others. Once someone has grasped a concept, it seems "easy" to them. So if a colleague calls something "easy" that you find complex, realize only that they have already internalized the idea. The same ideas will become natural to you as soon as you wrap your head around them.

Mathematics is Approachable

I've been asked a few times, "where do you study math?" Well, the internet is a good place to start, though I personally prefer structured texts for in-depth learning on a specific topic. Assuming you have the background, a well-structured math text should build a strong foundation for the key concepts in any particular field. On the other hand, I often like to delve into areas of math I know nothing about. At the start of this process, everything is a bit more exploratory, and reading a text on the topic is often beyond my comprehension. In such a situation, I find myself jumping down the rabbit hole, trying to gather a strong enough foundation to grapple with the ideas I wanted to learn in the first place.

For instance, suppose you know nothing-- but want to learn-- about affine spaces. If you're unfamiliar with linear algebra or Euclidean spaces, you may want to start learning about those first. Similarly, if you delve into those subjects and don't understand their foundations, recursively repeat the process (a sort of depth-first traversal on the subject-matter). While this may be time-consuming the first few times you do it, eventually you'll start to notice significant overlap in the foundational maths across concepts and this sort of research will become easier over time.

Ultimately, if you take the time to learn the base concepts of a subject, math continues to build in a logical way. Seeing that "natural" progression makes higher-level mathematics approachable to anyone patient enough to explore the fundamental concepts.

Mathematics is Reasoning

Above all, mathematics is the skill of reasoning. Since we have computers in the modern age, it's less useful to optimize our thought processes for crunching numbers. Instead, we should focus on understanding high-level concepts and algorithms. More concretely, understanding the proofs of key theorems often points to useful features of the underlying field of math.


Even if your day-to-day work doesn't directly require mathematics, it certainly demands that you reason about situations. By studying more advanced maths, you will find yourself more capable of handling a variety of situations and thinking abstractly. Similarly, studying math will help you question the "obvious" facts of any situation. All of this makes for better problem solvers.

Designing a Concise and Consistent GraphQL API

The web world moves quickly; every developer in this ecosystem is painfully aware of that fact. The state-of-the-art stack you start building your startup with today is considered ancient technology within 3 months. Even so, there are a few timeless (at least at Javascript speed) gems produced in this world. GraphQL is a technology that has been around for about 2 years now, and it integrates well with the React/Relay stack to provide a more complete ecosystem for web developers to work in. While Facebook and others have done a fine job discussing the merits of GraphQL and why you would use it (read: I'll let them convince you it's a good idea), we're going to discuss how to harness its power in a very reproducible way.

Difficulties with GraphQL

GraphQL is a great piece of technology. However, as a developer you're left with few guidelines to get moving quickly. In and of itself, GraphQL is a very non-opinionated interchange format. More specifically, it describes a graph-based structure for accessing and modifying data. While data access may seem like a straightforward task when previewing GraphQL, mutations are vague and very under-specified out of the box. While in theory (and in practice, once you've wrapped your head around it) this is a good thing for maximum flexibility, it leaves many developers scratching their heads wondering what to do.

GraphQL gains much of its power from this broad flexibility. At the same time, this flexibility creates a lot of cognitive overhead for developers. To date, most-- if not all-- GraphQL APIs I have seen are hand-written, which is a cumbersome task. As we move to client-side apps, backend servers are becoming more like secure datastores which enforce business constraints on data. This was likely always the case in traditional web development as well, but the backend was often merged with UI and UI-generation code, so the distinction was blurred. The only other logic performed on these servers is now often private computation which should be concealed from client code. Moreover, with hand-written endpoints, it's difficult to guarantee both API consistency and complete functionality from one endpoint to the next.

Ultimately, this creates an API ecosystem where each endpoint is different. Namely, understanding how to operate one endpoint within my own GraphQL application doesn't guarantee I can properly operate others (much less other GraphQL apps in general). This requires copious amounts of documentation on each endpoint before any developer can get up and running effectively. It's more reminiscent of my hardware days-- scanning through datasheets of similar components-- than of developing highly generic and reusable software components.

Solving these GraphQL Difficulties

As luck would have it, we already have a solution for eliminating the boilerplate of hand-written endpoints. In 2015, we built Elide. Elide enables developers to model their data using JPA, communicate with arbitrary datastores, secure data using meaningful expressions, and write custom code where necessary. It's been a large-scale, multi-year effort, but it has proven very effective on our own products. In any case, this is a solution designed to solve all the problems that fall out from hand-written endpoints: API consistency, uniform functionality, proper security, developer boredom, etc. The only problem: it didn't support GraphQL (support has only now arrived in the 4.0 beta).

Initially, when we set out to build Elide, rather than reinventing the wheel we sought a standardized web API solution. GraphQL hadn't yet gained traction (and, if I recall correctly, it wasn't even fully released). As a result, we found JSON API and decided to build Elide around supporting this technology. It was opinionated, and generating the API was straightforward. While JSON API has a lot of strengths, we also found some issues with the opinionated stances it took (more on that in a later post). Now you may be thinking to yourself, "Wait a minute, why is this important? I thought this post was on GraphQL." Well, it turns out that by working so intimately with JSON API, we used-- what we believe to be-- some of its best ideas to inform how we generate uniform, consistent, and automatable GraphQL APIs.

Having found a solution for minimizing endpoint code, a new question loomed: how do we generate a consistent GraphQL API? That is, for any endpoint, all you need to know is the data model. From there, you have a standard set of tools available to you. Of course, you could extend any particular model with special logic as needed, but generally speaking, there would exist a fully supported, common toolset for all models.

Designing a GraphQL API

Our motivation for building GraphQL support into Elide was to more easily adopt its client-side ecosystem. It will not replace JSON API, but will instead live alongside it so users can choose the solution that best fits their project. However, this imposes an additional set of constraints; subtle implementation details such as pagination must be compatible with the existing tooling (i.e. Apollo or Relay). But one problem at a time: while GraphQL query schemes appear to be well worked out, we first need a consistent means of object mutation.

Consistent GraphQL API Mutation

Our first goal was to find a consistent way of specifying GraphQL mutations; basically, any time you wish to insert or update data. After several ideas, we came back to an approach inspired by JSON API and REST (though it is certainly not REST). In summary, we turn each JPA relationship and Elide rootable type into an object which takes GraphQL arguments. The arguments are op, ids, and data. Without going into all the details (the current spec shows examples), this allows us to support the same operations in a decidable way across all exposed entities in our GraphQL API. A brief description of each parameter is below:

  • op. This parameter describes the operation to be performed. When unspecified, it defaults to a FETCH operation, but it can also take on values such as UPSERT, REMOVE, REPLACE, and DELETE.
  • ids. When provided, this list of ids is used to filter the collection on which the operation is being performed.
  • data. The data argument is used for UPSERT and REPLACE operations. It specifies the new input data from the user.

With these three arguments, a user can perform arbitrary data operations on their models.
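For example, a hypothetical upsert against a book model might look like the following (the model and field names are illustrative; see the spec for the exact forms):

```graphql
mutation {
  # Upsert new data onto the book with id 1.
  book(op: UPSERT, ids: ["1"], data: { title: "New Title" }) {
    id
    title
  }
}
```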

An Apollo/Relay-Ready GraphQL API

Automating an API that is Apollo- and Relay-compatible adds a few more layers of complexity. In short, a model in our originally proposed scheme would look like:
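Here is a hypothetical fetch under that original scheme (model and field names are illustrative):

```graphql
{
  # Fetch book 1, its title, and the names of its authors.
  book(ids: ["1"]) {
    id
    title
    authors {
      name
    }
  }
}
```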

The example above will fetch a book object from the system with an id of 1. It will return its title and the names of all its associated authors. Pretty straightforward, right?

Well, when accounting for important concepts like pagination, this will not work with Relay out of the box. As a result, we adopted Relay's Cursor Connections for maximum compatibility. While the scheme is still what we proposed, there are now 2 additional layers of indirection for each model (namely, edges and node objects). These layers carry additional metadata. See below:
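The same fetch with the connection-style indirection becomes (again, names are illustrative):

```graphql
{
  book(ids: ["1"]) {
    # edges/node wrap each model so Relay can attach pagination metadata.
    edges {
      node {
        id
        title
        authors {
          edges {
            node {
              name
            }
          }
        }
      }
    }
  }
}
```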

As you can see, the layers of indirection make the format a little bit uglier, but the same overall concepts apply.


GraphQL has many great ideas and an incredible amount of flexibility. However, it's difficult to avoid hand-writing many endpoints and maintaining consistency across all of them. Elide was invented to solve exactly this class of problem and has recently implemented a consistent method for generating GraphQL APIs. If you're looking for a quick, out-of-box solution, I recommend considering Elide for your next project (to get started, see the example standalone project). In any case, when you build your next GraphQL API, be sure to think through these problems. If you don't adopt our scheme directly, at least be aware of the problems it solves. Good luck out there!

Programming Styles: Procedural, Object Oriented, and Functional

While there exist many programming paradigms, three popular styles are frequently used today: procedural programming, object oriented programming, and functional programming. While many languages adopt features across paradigms, most languages idiomatically prefer one style over the others. That is to say, even though these three methodologies are not mutually exclusive, their practical application within a language may have varying levels of support based on language features and community sentiment.

All of these styles have been known for many years, and each could easily be a post (or book) on its own. However, I will attempt to briefly introduce these styles in a way that I propose is similar to "past, present, and future." While I strongly believe all three paradigms will continue to exist long into the foreseeable future, I would be remiss to suggest that the industry isn't continuously evolving. As a result, while I believe any strong programmer should-- at the very least-- be adequately familiar with all of these styles, I do believe emphasis should be placed on learning the more relevant technologies.

Procedural Programming

Common Languages: C, Fortran, BASIC, COBOL, Go

As you can see from the common language list, procedural programming has all but fallen out of favor in modern language design. With the exception of Go, every language on that list is decades old. While most newer languages are not adopting this programming style, many systems still run on these languages. They have certainly stood the test of time, which implies there are good ideas here.

So what is this procedural programming thing? In a single sentence: a collection of predefined sequential statements (i.e. procedures) which manipulate system state. Put less densely, procedural programming allows programmers to write reusable blocks of code to perform actions within their program. If you've done a lot of programming in the 21st century, this concept may seem painfully obvious, since almost every modern language has first-class support for methods, libraries, and even package management nowadays.

A key distinction here from other paradigms (i.e. object oriented), however, is that your procedures and data are entirely distinct. The procedure takes some input, mutates it in some way, and optionally returns some status code to the caller.

Procedural Example

Let's examine a bit of C code to see procedural programming in action.
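Here is a minimal sketch of such a procedure (naive by design; the name mycpy and the error-code convention follow the discussion below):

```c
#include <stddef.h>

/*
 * Naive, modified memcpy: copies num_bytes bytes from src to dest.
 * Returns 0 on success, or -1 if num_bytes is negative. Note that the
 * result lives in the caller-provided output buffer, not the return value.
 */
int mycpy(void *dest, const void *src, long num_bytes) {
    if (num_bytes < 0) {
        return -1;  /* error code: invalid length */
    }

    char *d = (char *)dest;
    const char *s = (const char *)src;
    for (long i = 0; i < num_bytes; ++i) {
        d[i] = s[i];  /* state mutation of the caller's buffer */
    }
    return 0;  /* success code */
}
```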

For you C programmers out there, this probably looks pretty familiar: it's a naive, modified memcpy implementation. Namely, if the provided num_bytes value is less than 0, the procedure immediately returns with an error code; otherwise, it proceeds with the copy.

This example demonstrates typical procedural form. If your application often has to copy memory, you wouldn't want to write this procedure several times, so we named it "mycpy" and can reuse it throughout the application. Moreover, the return value is a status code rather than a usable value: it indicates whether our procedure succeeded or failed. The actual result of the computation is stored in the output argument provided by the caller.

While not all functions will be constructed this way in procedural languages (i.e. it is not uncommon for these languages to return the result directly if it is a primitive type), the general form for complex computations is as follows:

  1. Take the "result" object as input from the caller
  2. Perform predefined computation
  3. Store result in the "result" object (i.e. state mutation)
  4. Return appropriate status code (i.e. if error encountered provide error code, else success code)

In summary, procedural languages exhibit great ideas around code reuse and how to manage mutations and errors cleanly. Code written in procedural style often reads fairly well and is easy to follow for single-threaded applications. However, with all of the state mutation, it can become complicated for multi-threaded applications which are omnipresent in today's technology.

Object Oriented Programming

Common Languages: Java, C++, Python, Objective-C, C#

Another well-known programming paradigm is Object Oriented Programming (OOP). In my estimation-- based on both the job listings I've seen and the solicitations I've received-- this is the most popular style in use today. Even if you don't explicitly see familiarity with OOP in the job requirements, you will almost certainly see at least one of these languages listed on many job requisitions for software engineers. These languages are typically the "heavy hitters." That is, they're known to be the languages people usually go to for performance, reliability, and scale, especially in well-established and/or enterprise companies.

Object oriented programming is-- as you may have guessed-- centered around the concept of objects. Much like procedural programming, we'll continue to mutate state. However, rather than having the caller join together the data and the procedure, we'll instead combine the two. This is the foundation of what an object is; it's both data and a set of common operations that can perform computation on that data.

At first glance, this sounds like a marginal improvement over procedural programming. The most obvious benefit is that since your methods are now bound to your data, you don't have to pass the data object in. That's great, but is it really worth all this fuss? Well, as it turns out, having objects enables an entirely new class of abstractions: inheritance, encapsulation, and polymorphism (IEP). Those are some big words:

  • Inheritance. This is when an object derives a set of properties (methods and data) from another object. It typically represents an "is a" relationship (i.e. a Car is a Vehicle).
  • Encapsulation. You can now hide all the internal details about your model. In theory, if you have modeled your objects properly, you can avoid leaking any internal details and the caller can use interfaces without having to look at the code.
  • Polymorphism. This allows you to treat a specific type as a more generic type; it's the other side of inheritance. Specifically, if you want to perform an operation on all Vehicle classes in your system, you can do so. Whether you provide it a Car or a Boat-- as long as they both inherit Vehicle-- is irrelevant. You can simply treat them as vehicles without any additional code.

It should now be clearer how OOP can actually be a stark improvement-- by way of code reuse and, ideally, more powerful abstractions-- over procedural programming. While we'll be focusing solely on what we have mentioned for now, OOP enables other design patterns as well (mixins, object composition, etc.) that we will discuss in a separate post. Even though we don't go through them here, curious readers should investigate further to see how these patterns behave and what problems they solve.

Object Oriented Example

Below is an example in Java:
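This is a sketch matching the description below; the direction handling (degrees, with 0 as the starting heading) is an illustrative choice:

```java
import java.util.List;

// Base class: getName() has no implementation here, while turnLeft()
// and getDirection() are shared by every vehicle.
abstract class Vehicle {
    private int direction = 0;  // heading in degrees, encapsulated

    public void turnLeft() {
        // Turning left subtracts 90 degrees (adding 270 avoids negatives).
        direction = (direction + 270) % 360;
    }

    public int getDirection() {
        return direction;
    }

    public abstract String getName();
}

class Car extends Vehicle {
    @Override
    public String getName() {
        return "Car";
    }
}

class Boat extends Vehicle {
    @Override
    public String getName() {
        return "Boat";
    }
}

class VehicleDemo {
    public static void main(String[] args) {
        // Polymorphism: both vehicles live in one List<Vehicle>.
        List<Vehicle> vehicles = List.of(new Car(), new Boat());
        for (Vehicle v : vehicles) {
            v.turnLeft();
            System.out.println(v.getName() + " heading " + v.getDirection());
        }
    }
}
```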

This example continues from where I left off above. There is a base class, Vehicle, which leaves getName() unimplemented. Two types of vehicle inherit the Vehicle class: Car and Boat, and each actually implements the getName() function. If you observe the main function, you'll see that both our Car and Boat are stored in a list of Vehicle objects (i.e. polymorphism). We also benefit from code reuse in this abstraction: we only had to implement a single turnLeft() and getDirection() function, and both vehicles gained this functionality without additional code. Meanwhile, the direction data variable is silently stored within the object (i.e. encapsulation). As you can see, putting the data and methods in the same container has provided us with some additional code-reuse power.

Overall, object oriented programming is incredibly important in the industry today. While it lends itself well to natural abstractions such as the Vehicle example, it often requires a lot of forethought and/or refactoring to avoid leaky abstractions in complex systems. As a result, design can often become more difficult in object-oriented systems than procedural systems due to its flexibility. However, similar problems still exist in multi-threaded applications as they do in procedural systems: when the codebase becomes large, it is often difficult to follow the mutations through the system. This problem is somewhat mitigated as entire objects can be written to be made thread-safe, but any paradigm which advocates data mutation opens itself up to the same class of problems. While nothing is perfect, OOP seems to strike a balance for most programmers. Namely, it's an understandable concept with great power and flexibility to eliminate code redundancy.

Functional Programming

Common Languages: Haskell, Lisp, OCaml, ML, Scheme

In my opinion, this section begins the future of programming. While functional programming has been around for quite some time, its design eliminates entire classes of problems encountered in other languages. Similarly, many traditional arguments against the practicality of functional programming (e.g. performance) are now negligible for all but the most specialized use-cases. Then again, I likely wouldn't put the JVM on an embedded system either, so this problem is not unique to the functional paradigm.

Functional programming (FP) takes a step away from what we've been discussing. Rather than thinking about how the computer executes instructions and moves data, we instead look at our problem in a more logical way. There are no longer methods or procedures but instead functions. That is, a set of operations that take input and produce output. Likewise, data and functions are distinct elements; we no longer couple the two like in OOP. One of the most important notions in FP is that of immutable data. Logically, once something is created it cannot be modified. If you want to change an object's values, create a new instance with the updated data and return that to the caller. Before you stop reading here, remember two points I've been making:

  1. Even if there were full copies of your data each time you needed to make a change, processors are fast enough today for most applications
  2. I've mentioned that this is a logical model. More specifically, compilers can optimize in clever ways, mutating data behind the scenes when it is safe to do so
    1. The benefit here is that the programmer doesn't have to worry about this and, therefore, mistakes are minimized

Functional programming lends itself well to a lot of other cool concepts (e.g. lazily initialized collections, lazy function evaluation, correctness proofs, etc.), but we don't have the space to go into all of that right here. However, you'll notice almost all modern languages are adopting the functional programming paradigm. While I tout FP as being the "future," the fact is that it's already here. Python has always supported functional map, reduce, lambdas, and list comprehensions. Similarly, C++ and Java have adopted a whole set of functional concepts in their recent releases as first-class citizens of the language. Moreover, with callbacks and the like, Javascript makes use of a lot of functional concepts, and it is often idiomatic to write functional Javascript. What I'm getting at is that the industry is clearly learning. There has been a lot of griping that "functional is hard," but as people begin to understand it they are realizing that it is actually a cleaner, more concise way to model the world. And since most people are not writing highly specialized embedded systems, functional languages are more than fast enough to meet their applications' performance requirements.
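For instance, the Python features mentioned above (map with a lambda, reduce, and a list comprehension) look like this:

```python
from functools import reduce  # reduce lives in functools in Python 3

nums = [1, 2, 3, 4]

squares = list(map(lambda n: n * n, nums))         # map + lambda
total = reduce(lambda acc, n: acc + n, nums, 0)    # reduce (a fold)
evens = [n for n in nums if n % 2 == 0]            # list comprehension

print(squares, total, evens)  # [1, 4, 9, 16] 10 [2, 4]
```

Note that none of these operations modify `nums`; each produces a new value, which is the functional style in miniature.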

Functional Example

As you can tell, I'm a bit biased. I really like functional programming (even though I write mostly OOP at work). In any case, I will provide an example in Haskell to demonstrate some of FP's power. In FP we're going to model things as higher-level abstractions. With this in mind, what follows is neither the cleanest nor the most concise way to express this in Haskell, but it should be explicit about what's going on:
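The original Haskell listing appears to be missing from this post; here is a sketch consistent with the walkthrough that follows. The Direction type and the record field names are my assumptions.

```haskell
-- Directions cycle counter-clockwise when turning left.
data Direction = North | West | South | East deriving (Show, Eq)

leftDirection :: Direction -> Direction
leftDirection North = West
leftDirection West  = South
leftDirection South = East
leftDirection East  = North

-- The Vehicle class describes the behavior of all vehicle types.
class Vehicle a where
  name      :: a -> String
  direction :: a -> Direction
  turnLeft  :: a -> a

-- Plain data records: no code is attached to the data itself.
data Car  = Car  { carDirection  :: Direction }
data Boat = Boat { boatDirection :: Direction }

-- Each data type is declared to be a Vehicle.
instance Vehicle Car where
  name _     = "Car"
  direction  = carDirection
  turnLeft c = c { carDirection = leftDirection (carDirection c) }

instance Vehicle Boat where
  name _     = "Boat"
  direction  = boatDirection
  turnLeft b = b { boatDirection = leftDirection (boatDirection b) }

-- Works generically on anything that is a Vehicle.
printVehicleInfo :: Vehicle a => a -> IO ()
printVehicleInfo v = putStrLn (name v ++ " facing " ++ show (direction v))

main :: IO ()
main = do
  let car  = Car North
      boat = Boat North
  -- turnLeft returns new values; nothing is mutated in place
  printVehicleInfo (turnLeft (turnLeft car))  -- Car facing South
  printVehicleInfo (turnLeft boat)            -- Boat facing West
  printVehicleInfo car                        -- still North (immutability)
  printVehicleInfo boat                       -- still North
```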

In this example, you will notice several things we have already discussed. First of all, we create a Vehicle typeclass which describes the behavior of all vehicle types in our program. The next bit of code then defines our data objects (i.e. records). You will notice that there is no code associated with these data objects.

Next, we get to a bit of implementation code. Namely, we define that both data types are indeed Vehicle types conforming to the class definition we outlined above. From this point on, we can generically treat each data type as a Vehicle. While this may not seem incredibly useful in this particular example, it enables us to access any code which knows how to operate on the Vehicle class. In this example, those are leftDirection and printVehicleInfo. However, this feature becomes particularly useful for very common operations such as Traversable operations.

Finally, if you direct your attention to the main method, notice the order of our calls. We run a few turnLeft calls on our boat and car before printing their information. Then, after that, we print the original information. It is important to recognize that the original boat and car instances remain unchanged even though the turnLeft calls produced correct, updated results.

While not completely evident here, the power of functional programming is in its ability to generalize concepts. Similarly, since data is separate from the functions which operate on it, it also increases code reuse. If you have a data type provided by someone else and you want to perform a certain set of well-defined operations on it, you can implement the appropriate typeclass instance for it without having to reimplement any additional functionality. Moreover, the immutability of data structures dramatically improves the logical model of your system. Namely, in multi-threaded applications, nothing can "accidentally" change anything else. This allows us to mostly do away with locks and the other pitfalls associated with multi-threaded programming.


There are many programming paradigms and a lot to know about each before determining which is best for your use case. While I cannot assert that there is a single best solution for all problems, I do claim that there is often a better choice for a specific problem. I have introduced you to three major programming styles today, all of which have practical relevance. It is well worth your time to take a deeper look into any of these paradigms and determine which may be the best fit for your next project.

Docker Compose: Creating a Dev Environment like Production

Integration Testing

Integration testing is an important part of any production system. Whether it is automated, performed by a QA team, or done by the developers themselves, it is essential that all the product bits are verified before revealing the result to the customer. Generally speaking, integration testing is simply running a set of tests against a production-like environment. That means you remove any testing mocks you may have in place and observe the actual interactions between the various services which make up your product.

Development Issues with Complex Systems

The larger your system scales, the more difficult it is to test it all on the same box. This follows from increasing system complexity: as your user base grows, you utilize more resources, and your software architecture begins to span multiple machines. For instance, if you're building a lot of microservices, each of these needs to be stood up and configured properly to work on your local box. Each developer then needs to duplicate this work for his or her own local setup.

Eventually, maintaining your local "full-integration" development environment becomes unwieldy and possibly even more difficult to manage than production (due to dependency issues, etc.). What this ultimately means is that developers kill this environment entirely. They write their code and unit tests and then just ship it off to the build pipeline. After waiting a period of time, their code shows up in a pre-production environment for a full-scale integration test. At this point, they identify whether there are any glaring bugs in their code or other oversights.

The problem here is that this process is inefficient. Not only is it slow, but when you share an environment for testing, it is common that multiple changes are deployed at once. As a result, it can become unclear which change is causing issues. Now let me make myself clear: you should have an environment that integrates all the latest changes before they go to production. This environment will protect you from a plethora of additional production issues (e.g. logically incompatible changesets); however, it is not the best way for developers to test their code. Developers should have tested their code thoroughly before pushing it to the environment that will certify it and send it to production.

Docker Compose

Well, this sure seems like a predicament. I just mentioned that for large services it often becomes incredibly difficult to maintain a local version of your complete system. While this is true, we can significantly reduce the burden with Docker Compose, a tool for managing multi-container Docker applications. In short, you can define an arbitrary system composed of many containers. This tool provides a perfect foundation for us to reproduce a small-scale version of production that can be run entirely locally.

Using Docker Compose should be trivial if you're already deploying your services as Docker containers. If not, you should first create Docker images for all of your services; while this is labor intensive, you and your team members can reuse these images in the future.

Our Example

Now that we understand the problem and our tools for solving it, we will work through an example. Below is a diagram describing our scenario.

Network model. WordPress application server connects to Internet and intranet while MySQL DB only connects to intranet.

In summary, we have a basic WordPress setup with a few minor tweaks. Rather than hosting both MySQL and WordPress on the same box, we have separated the concerns. Our WordPress application server is accessible on the open internet and our internal network. Our MySQL server, on the other hand, lives on a separate box accessible only from our intranet, preventing external requests from reaching the database directly. This example illustrates how one may naturally expand their services. Similarly, you could generalize this concept to arbitrarily complex networks.

Assuming this is the network we want to model with Docker Compose, let's take a look at the configuration file below.
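The original configuration file is not reproduced here; the following is a minimal Docker Compose sketch matching the description (image versions and credentials are placeholders):

```yaml
version: "3"

services:
  wordpress:
    image: wordpress:latest
    ports:
      - "80:80"                        # exposed to the open internet
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: example   # placeholder credential
    networks:
      - external-net                   # "Internet" in the diagram
      - backend                        # "intranet" in the diagram

  db:
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: example          # placeholder credential
      MYSQL_ROOT_PASSWORD: example
    networks:
      - backend                        # intranet only; never exposed externally

networks:
  external-net:
  backend:
    internal: true                     # containers here cannot reach outside
```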

Without delving too deeply into the configuration format, the most notable information in this configuration file is the virtual networking. We have two different networks-- external-net and backend-- which correspond to Internet and intranet in our diagram, respectively. These networks provide the separation of concerns we designed above. However, more important than the implementation details is the concept this represents. Namely, we can specify the images, settings, and networking configuration for our Docker containers and reuse this file everywhere. Once this file has been written, it can be shared with the entire team, making local integration testing accessible again. With a little maintenance as new services are added, this file can remain a faithful representation of your production environment for developers.


We have briefly discussed a major impediment to local integration testing today: the growing complexity of products built on microservice architectures. However, since this architecture has many benefits, we need to revisit how we enable developers to perform more comprehensive testing before pushing their changes into the production build pipeline. We have demonstrated a simple use case of Docker Compose for this task. By creating a single, shareable representation of the production setup, we can keep developers moving forward and reduce the overall number of bugs merged into mainline code.

The Build Pipeline: From Unit Testing to Production

The Build Pipeline

The build pipeline describes the process by which new code makes its way out to a production environment. One may even consider a developer building code on his or her local machine and manually deploying it to a server a primitive build pipeline. While this approach may work well for small or non-critical operations, it is insufficient for most professional work. Whether you're working at the hottest new startup or for a larger company, defining an effective build pipeline and streamlining your deployment process is of utmost importance. While this article omits implementation details, I will thoroughly explain each step, why it's important, and how it improves the lives of developers and the overall stability of products.

Continuous Integration, Continuous Delivery

Before I delve deeper into build pipelines, I want to briefly familiarize the audience with continuous integration and continuous delivery (CICD). This concept has been around for several years, but I still hear grumbles about it from colleagues. In summary, the idea is that every commit to the mainline (usually the master branch in git) is built and continuously tested (continuous integration), and when all of those tests pass, the code is deployed immediately to production (continuous delivery).

Many people claim that such a system sounds good in theory, but always fails in practice. Well, I happen to have it on good authority (i.e. personal experience) that this sentiment is categorically false. Yahoo/Flurry/Oath have been using CICD for some time now and the method works very well. In fact, it saves a lot of headache and avoids many mistakes or potential outages which occur from manual deploys or even gated deploys (the discussion of distinction between the two may be for another time, however).

While I am a proponent of CICD and will center our build pipeline discussion around this idea, I must admit that it does front-load a lot of the work. That is to say, CICD requires a larger upfront investment than traditional means of operations and code deployment. While the infrastructure can theoretically be built out over time, it is best to have all of it in place before releasing your product.

In this way, you will be able to allocate sufficient resources to building a robust system. If the product is released before the CICD infrastructure has been properly laid out, it's very easy to get sidetracked into focusing only on improving the product rather than the process of releasing changes. This ultimately wastes a significant amount of developer resources. Please note, when I say infrastructure I really mean your deploy scripts or something similar; I expect most companies will not roll their own CICD solution and will instead use something like Jenkins or Screwdriver.

tl;dr: CICD is great, but you need to give it the upfront investment it deserves when you're building a new system. Ensure that the infrastructure is in place (even if not all the testing is finished, depending on how fast and loose you're playing) before officially launching your product.

Philosophy of the Build Pipeline

Let's move on and discuss the ideas behind our build pipeline a bit more deeply. In summary, an effective build pipeline should have at least 3 phases:

  1. Unit testing phase. Oftentimes this is the first step in your build pipeline; unit testing runs before you've packaged your code for shipping. In this phase, all unit tests should be run for the codebase being built. Similarly, you can run "local" integration-style tests (with mocks and so forth) in this phase if you have them.
  2. Smoke testing phase. If you have the resources, you should have a non-production environment which looks nearly identical to your production environment (though probably at much smaller scale). It's even possible to run this environment on a single box if the services won't conflict with each other. Similarly, you would not necessarily use production data in this environment. Most importantly, this environment runs real services. At this point you should run a set of smoke tests which will effectively test basic integration of your services.
  3. Integration testing phase. The final essential component of a build pipeline is the integration testing phase. This phase should deploy your services to a production or production-like environment and verify a full suite of integrations on your production system. With a proper test suite, performing this step enables the developers to find the vast majority of issues before they become customer-facing.

While we have discussed 3 primary components of a build pipeline, this often represents the bare minimum. Build pipelines can be arbitrarily complex and can even include triggering up- or downstream dependencies. No matter how complex your build pipeline dependency graph becomes, these 3 phases should be present in some capacity.
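The three phases above can be sketched as a minimal pipeline script. This is a hypothetical illustration; the stage functions are placeholders standing in for whatever your CI tool actually runs at each step.

```shell
#!/bin/sh
# Minimal sketch of the three-phase pipeline described above.
# Each function is a placeholder for a real pipeline stage.
set -e  # any failing phase stops the pipeline immediately

run_unit_tests() {
  # runs before the code is packaged for shipping
  echo "phase 1: unit tests passed"
}

run_smoke_tests() {
  # would first deploy to the small-scale, non-production environment
  echo "phase 2: smoke tests passed"
}

run_integration_tests() {
  # would first deploy to the production(-like) environment
  echo "phase 3: integration tests passed"
}

run_unit_tests
run_smoke_tests
run_integration_tests
echo "build verified"
```

Because of `set -e`, a failure in any phase halts the script, mirroring how a real pipeline stops at the first failing stage.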

A More Sophisticated Build Pipeline

With the 3 components listed above, we will now go through an example of a more sophisticated build pipeline. While not overly complex, this is a realistic pipeline that one could use to deploy their own code. Again, implementation details are omitted, but the core concepts remain.

Example of build pipeline

A brief explanation of the diagram above follows:

  • Code repository. This is where your raw source code lives. It is likely a version control system (VCS) such as git, svn, or otherwise.
  • Artifact repository. The artifact repository is where your compiled code packages live. For instance, this could be a local Artifactory or npm repository.
  • Unit testing. The unit testing phase is described above. It first pulls in code from your repository, then runs and verifies the unit tests. Upon successful completion, it uploads a compiled artifact to the artifact repository and triggers the smoke testing job.
  • Smoke testing. Smoke testing is also described above. It should deploy the latest artifact from the artifact repository and run a series of smoke tests. Upon successful completion, it can optionally tag the artifact as the last smoke-verified artifact (to better ensure you never accidentally deploy untested code) and then trigger the pre-prod testing job.
  • Pre-Prod Testing. The pre-production environment is an "extra" production box: either a host taken out of rotation or a dedicated host (or set of hosts) connected to production services but never actually visible to the outside world. This environment tests the code you wish to deploy against your current production setup (before you actually deploy it). It should pull the latest available service artifact (or the latest smoke-verified artifact, if you tag them) and run a series of typical production-style integration tests. Upon successful completion, it should tag its artifact as the latest verified artifact and trigger the int testing job.
  • Int Testing. Finally, integration testing is the last step in this build pipeline. Assuming you have a cluster of hosts running your services (good practice for redundancy), it will take a subset of those hosts out of rotation (OOR); this ensures the service stays fully available to customers while the deployment is ongoing. For the OOR hosts, it deploys the latest verified service artifact and waits for the service to come up. When the service is ready, it runs the set of integration tests on those boxes. After those boxes have been verified, it returns the OOR hosts to the production rotation and takes out a different subset. This process repeats for however many distinct subsets exist. That is, if you deploy to a single box at a time and have a 5-box cluster, this step repeats 5 times, once per box.

By the end of this build pipeline, your newly built and tested code is fully deployed to production if all steps pass. If at any point the tests fail, the build stops and does not proceed further in the pipeline. It is important to recognize that the final integration testing phase could, in fact, leave a subset of boxes OOR if the tests fail. As a result, the number of boxes deployed at once should not exceed the number of failed hosts your application can tolerate.
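The out-of-rotation deploy loop described above can be sketched as follows. This is an illustration only; `deploy` and `run_tests` are hypothetical hooks standing in for your real orchestration calls.

```python
# Sketch of the rolling, out-of-rotation (OOR) deploy loop described above.
# The deploy/run_tests callables are hypothetical hooks, not a real API.

def rolling_deploy(hosts, batch_size, deploy, run_tests):
    """Deploy in batches so the rest of the cluster stays in rotation."""
    for i in range(0, len(hosts), batch_size):
        batch = hosts[i:i + batch_size]  # take this subset out of rotation
        for host in batch:
            deploy(host)                 # push the latest verified artifact
        if not run_tests(batch):
            # Stop here: the failed batch stays OOR, which is why batch_size
            # should be a tolerable number of lost hosts.
            raise RuntimeError("integration tests failed on %s" % batch)
        # Tests passed: this batch returns to rotation; continue to the next.

deployed = []
rolling_deploy(["box1", "box2", "box3", "box4", "box5"], 1,
               deployed.append, lambda batch: True)
print(deployed)  # each box was deployed one at a time
```

With `batch_size=1` on a 5-box cluster, the loop runs 5 times, matching the description above; a failing `run_tests` aborts the pipeline with the failed batch still out of rotation.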


While more complicated build pipelines than we've discussed exist, build pipelines do not need to be complex to be useful. However, there is a minimum set of functionality they must cover to be effective. Even the simplest of build pipelines can improve developer productivity and reduce operational mistakes. Simply by automating the testing process, we've avoided human error (e.g. forgetting to run a test, skipping a test intentionally, or not following a service's deployment steps properly) and ensured that our tests are always run. Not only does this avoid error, it also frees up engineering resources to perform other useful work.

Above all, if you do not currently have a build pipeline, you should consider designing and implementing one. Not only will it improve the lives of your engineers, but it will also provide confidence to all of your business units. A proper build pipeline allows everyone in your organization to feel confident about the code quality of user-facing products.