Lens, State is your father… and I can prove it!

Here is our new blog post, a sequel to Lens, State Is Your Father. Today, we’ll try to formalize some of the informal claims we made in that article, and we’ll emphasize the relevance of proof assistants in functional programming. We’ve decided to publish the content of this post as a GitHub repo, to get better formatting of the Coq snippets (so please, feel free to send your pull requests to improve it). As usual, we hope you like it!

Posted in coq, Lens, monad, Optics, proof, Scala, State, Type Class | Leave a comment

Meet Stateless in #scalax

In a few weeks, our team will travel to London to attend Scala eXchange 2017.
We’re really excited about it, because we’ll be introducing so-called optic algebras in a lightning talk.

Optic algebras emerged to overcome the limitations of existing techniques for handling the data layer of real-world applications. On the one hand, optics support rich patterns to manipulate data, but they’re restricted to immutable data structures. On the other hand, algebraic abstractions such as MonadState provide the means to work with general settings (relational databases, microservices, etc.), but the patterns they support are much poorer. Optic algebras attempt to supply rich patterns while remaining general, therefore combining the best of both worlds.

We’ll take this opportunity to premiere our new Scala library, which we’ve affectionately named Stateless. This library exploits the notion of optic algebra and aims at making it easier to deal with the state of your applications. In this sense, you could implement the data layer of your application and its business logic once and for all, using the domain specific language provided by Stateless, and later interpret it into particular state-based technologies. This library thus complements other open source efforts of Habla Computing (our functional architecture studio) such as puretest, to contribute to the functional ecosystem of Scala.

We look forward to seeing you in #scalax!

Posted in Uncategorized | Leave a comment

Don’t Fear the Profunctor Optics (Part 3/3)

Once we’ve seen concrete optics and profunctors, it’s time to introduce the last installment of this series: Profunctor Optics. Here, we’ll see how to encode optics in a profunctor representation, which takes composability to the next level. As usual, your feedback is welcome!

Posted in Uncategorized | Leave a comment

Don’t Fear the Profunctor Optics (Part 2/3)

As promised, here is the second installment of our series on profunctor optics: Profunctors as Generalized Functions. Today, we’ll introduce several members of the profunctor family (Cartesian, Monoidal, etc.) and we’ll provide their corresponding diagrams, to make them more approachable.

Posted in Uncategorized | Leave a comment

Don’t Fear the Profunctor Optics (Part 1/3)

Today we start a new series of posts on Profunctor Optics. Since WordPress has some limitations when it comes to highlighting Haskell snippets, we’ve decided to publish it as a GitHub repo. You can find the first part here: Optics, Concretely. We hope you like it!

Posted in Uncategorized | Leave a comment

Functional APIs: an OOP approach to FP

In the series of posts about the essence of functional programming, we’ve already seen how we can build purely declarative programs using GADTs. This is a picture of what we got (using more standard cats/scalaz data types):


The program above has several advantages over an impure one, given that it completely separates the business logic (the WHAT) from the interpretation (the HOW). This opens up a full range of possibilities, since we can change the whole deployment infrastructure without having to change the logic in any way. In other words, business logic changes affect only business logic code, and infrastructure changes affect only interpreters (provided that neither of these changes affects the DSL, of course). Some changes of interpretation could be, for instance, running the program asynchronously using Future, or running it as a pure state transformation for testing purposes using State.

Now, you might be wondering, is OOP capable of achieving this level of declarativeness? In this post, we will see that we can indeed do purely functional programming in a purely object-oriented style. However, in order to do so, the conventional techniques that we normally employ when doing OOP (plain abstract interfaces) won’t suffice. What we actually need are more powerful techniques for building Functional APIs, namely type classes!

The issues of conventional OOP

In OOP, the most common way to achieve declarativeness is, of course, by using plain abstract interfaces. In a similar way to the GADT approach, we can identify four parts in this design pattern:

  • Interface/API
  • Method/Program over that interface
  • Concrete instances
  • Composition

Here is a very illustrative diagram of this approach:


However, this is just one step towards declarativeness: it separates WHAT and HOW a little, since IO is an abstract interface, but we still have a very limited range of possible HOWs. This is quite easy to show by giving a couple of interpretations we cannot implement: for instance, asynchronous and pure state transformations. In the former case, we simply can’t implement the IO signature in an asynchronous way, since the signature forces us to return plain values, i.e. a value of type String in the read case, and a value of type Unit in the write case. If we attempt to implement this API in an asynchronous way, we will eventually get a Future[String] value, and we will have to convert this promise into a plain String by blocking the thread and waiting for the asynchronous computation to complete, thus rendering the interpretation absolutely synchronous.

object asyncInstance extends IO {
  def write(msg: String): Unit =
    Await.result(/* My future computation */, 2 seconds)
  def read: String = /* Idem */
}

Similarly, a state-based interpretation won’t be possible. In sum, if we want an asynchronous or a pure state-transformer behaviour for our programs, we would have to change the original interface to reflect those changes and come up with two new APIs:

trait IO { // Async
  def write(msg: String): Future[Unit]
  def read(): Future[String]
}

trait IO { // Pure state transformations
  def write(msg: String): IOState => (IOState, Unit)
  def read(): IOState => (IOState, String)
}

This is clearly not desirable, since these changes in the API will force us to rewrite all of the business logic that rests upon the original IO API. Let’s go ahead and start improving our OOP interfaces towards true declarativeness. As we’ve seen in this pattern, we can distinguish between the abstract world (the interface and interface-dependent methods) and the concrete world (interface instances and composition).

Abstract world: towards Functional APIs

We may notice that there are not many differences among the three interfaces we’ve shown so far. In fact, the only differences relate to the return type embellishment in each case:


We can factor out these differences and generalize a common solution for all of them; we just need to write our interface in such a way that the instructions (methods) don’t return a plain value, but a value wrapped in a generic type constructor: the so-called embellishment. From now on, we will also call these embellishments programs, as they can be considered computations that will eventually return a result value (once the asynchronous computation completes, or when we enact the state transformation).

trait IO[P[_]] {
  def read: P[String]
  def write(msg: String): P[Unit]
}

// Console
type Id[A] = A
type SynchIO = IO[Id]

// Async
type AsyncIO = IO[Future]

// Pure state transformations
type State[A] = IOState => (IOState, A)
type StateIO = IO[State]

Wow! Our new interface is a generic interface and, more specifically, a type class that solves our declarativeness problem: we can now create interpreters (instances) for asynchronous computations, state transformers, and any other program you may think of.
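For instance, a pure state-transformer interpreter is just an instance of the type class. Here is a minimal sketch, where we model IOState (an assumption on our part) as a pair of pending inputs and written outputs:

```scala
// Minimal sketch: a pure state-based instance of the IO type class.
// IOState is modeled, for illustration, as pending inputs plus outputs.
case class IOState(in: List[String], out: List[String])
type State[A] = IOState => (IOState, A)

trait IO[P[_]] {
  def read: P[String]
  def write(msg: String): P[Unit]
}

object stateIO extends IO[State] {
  // Consume the next pending input line
  def read: State[String] =
    s => (IOState(s.in.tail, s.out), s.in.head)
  // Append the message to the written outputs
  def write(msg: String): State[Unit] =
    s => (IOState(s.in, msg :: s.out), ())
}
```

No thread is blocked and no console is touched: running a program is just applying a function to an initial IOState, which is exactly what makes this interpretation so convenient for testing.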

We call these type-class-based APIs functional APIs, due to their ability to totally decouple business logic from interpretation. With our traditional interfaces we still had our business logic contaminated with HOW concepts, specifically with the limitation of always running in Id[_]. Now, we are truly free.

Abstract world: programs

Ain’t it easy? Let’s see what we have so far. We have a type class that models IO languages. Those languages consist of two instructions, read and write, which return plain abstract programs. What can we do with this type class already?

def hello[P[_]](IO: IO[P]): P[Unit] =
  IO.write("Hello, world!")

def sayWhat[P[_]](IO: IO[P]): P[String] =
  IO.read
Not very impressive: we don’t have any problem building simple programs, but what about composition?

def helloSayWhat[P[_]](IO: IO[P]): P[String] = {
  IO.write("Hello, say something:")
  IO.read
} // This doesn't work as expected

Houston, we have a problem! The program above just reads the input but doesn’t write anything: the first instruction is a mere pure statement in the middle of our program, hence it does nothing. We are missing some mechanism to combine our programs in an imperative way. Luckily for us, that’s exactly what monads do; in fact, monads are just another Functional API: 🙂

trait Monad[P[_]] {
  def flatMap[A, B](pa: P[A])(f: A => P[B]): P[B]
  def pure[A](a: A): P[A]
}

Well, you won’t believe it, but we can already define every single program we had in our previous post. Emphasis on the word define, as that is all we can do: define or declare, in a pure way, all of our programs; but we’re still in the abstract world, in our safe space, where everything is wonderful, modular and comfy.

def helloSayWhat[P[_]](M: Monad[P], IO: IO[P]): P[String] =
  M.flatMap(IO.write("Hello, say something:")){ _ =>
    IO.read
  }

def echo[P[_]](M: Monad[P], IO: IO[P]): P[Unit] =
  M.flatMap(IO.read){ msg =>
    IO.write(msg)
  }

def echo2[P[_]](M: Monad[P], IO: IO[P]): P[String] =
  M.flatMap(IO.read){ msg =>
    M.flatMap(IO.write(msg)){ _ =>
      M.pure(msg)
    }
  }

Ok, the previous code is pretty modular, but it isn’t very sweet. With a little help from our friends (namely context bounds, for-comprehensions, helper methods and infix operators), we can get closer to the syntactic niceties of the non-declarative implementation:

def helloSayWhat[P[_]: Monad: IO]: P[String] =
  write("Hello, say something:") >>
    read

def echo[P[_]: Monad: IO]: P[Unit] =
  read >>= write[P]

def echo2[P[_]: Monad: IO]: P[String] = for {
  msg <- read
  _ <- write(msg)
} yield msg

You can get the details of this transformation in the accompanying gist of this post.
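The helper layer behind this sugar could be sketched roughly as follows. The summoner-style helpers and the `>>`/`>>=` operators below are conventional names we chose for illustration; the accompanying gist may differ in its details:

```scala
// Sketch: syntactic helpers enabling the concise, declarative style.
trait IO[P[_]] {
  def read: P[String]
  def write(msg: String): P[Unit]
}

trait Monad[P[_]] {
  def flatMap[A, B](pa: P[A])(f: A => P[B]): P[B]
  def pure[A](a: A): P[A]
}

object syntax {
  // Helper methods: delegate to the implicit type class instance
  def read[P[_]](implicit IO: IO[P]): P[String] = IO.read
  def write[P[_]](msg: String)(implicit IO: IO[P]): P[Unit] = IO.write(msg)

  // Infix operators: `>>=` is flatMap; `>> ` sequences, discarding the result.
  // Having `flatMap`/`map` here also enables for-comprehensions.
  implicit class MonadOps[P[_], A](pa: P[A])(implicit M: Monad[P]) {
    def >>=[B](f: A => P[B]): P[B] = M.flatMap(pa)(f)
    def >>[B](pb: => P[B]): P[B] = M.flatMap(pa)(_ => pb)
    def flatMap[B](f: A => P[B]): P[B] = M.flatMap(pa)(f)
    def map[B](f: A => B): P[B] = M.flatMap(pa)(a => M.pure(f(a)))
  }
}
```

With these helpers in scope, programs written against any `P[_]` with `Monad` and `IO` instances read almost like their impure counterparts.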

Concrete world: instances and composition

As we said, these are just pure program definitions, free of interpretation. Time to go to the real world! Luckily for us, interpreters of these programs are just instances of our type class. Moreover, our console interpreter will look almost the same as in the OOP version; we just need to specify the type of our programs to be Id[_] (in the OOP approach this was set implicitly):

// Remember, `Id[A]` is just the same as `A`
implicit object ioTerminal extends IO[Id] {
  def write(msg: String) = println(msg)
  def read = readLine
}

implicit object idMonad extends Monad[Id] {
  def flatMap[A, B](pa: Id[A])(f: A => Id[B]): Id[B] = f(pa)
  def pure[A](a: A): Id[A] = a
}

def helloConsole(): Unit = hello[Id](ioTerminal)

def sayWhatConsole(): String = sayWhat(ioTerminal)

def helloSayWhatConsole() = helloSayWhat(idMonad, ioTerminal)

def echoConsole() = echo[Id]

def echo2Console() = echo2[Id]

So now we can start talking about the type class design pattern. In the same way we did with the plain abstract interface design pattern, here is the diagram of this methodology:


Conventional OOP vs. FP (OO Style) vs. FP (GADT style)

Fine, we’ve seen two ways of defining pure, declarative programs (GADTs and Functional APIs), and another one that unsuccessfully aims to do so (plain OOP abstract interfaces). What are the differences? Which one is better? Well, let’s answer the first question for now, using the following table:


As you can see, the GADT style of doing functional programming (FP) favours data types (IOEffect and Free), whereas FP in an OO style favours APIs (IO and Monad); declarative functions in the GADT style return programs written in our DSL (IOProgram), whereas declarative functions in FP (OO style) are ad-hoc polymorphic functions; concerning interpretations, the natural transformations used in the GADT style correspond simply to instances of APIs in OO-based FP; lastly, running our programs in the GADT style using a given interpreter just means plain old dependency injection in FP OO. As for the conventional OOP approach, you can see how it can be considered an instance of FP OO for the Id interpretation.

About the question of which alternative is better, GADTs or Functional APIs, there’s no easy answer, but we can give some tips:

Pros Functional APIs:

  • Cleaner: This approach implies much less boilerplate.
  • Simpler: It’s easier to put into practice, and it should be pretty familiar to any OOP programmer (no need to talk about GADTs or natural transformations).
  • Performance: We don’t have to create lots of intermediate objects like the ADT version does.
  • Flexible: We can go from Functional APIs to GADTs at any time, just by giving an instance of the type class for the ADT-based program (e.g., object toADT extends IO[IOProgram]).
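The last point above, going from Functional APIs to GADTs, can be sketched as follows, reusing the IO type class from this post and the IOProgram ADT from the previous one:

```scala
trait IO[P[_]] {
  def read: P[String]
  def write(msg: String): P[Unit]
}

sealed trait IOProgram[A]
case class Single[A](e: IOProgram.Effect[A]) extends IOProgram[A]
object IOProgram {
  sealed trait Effect[A]
  case class Write(s: String) extends Effect[Unit]
  case object Read extends Effect[String]
}

// From Functional API to GADT: the ADT-based program is itself
// an instance of the type class, lifting each instruction into
// the corresponding program value.
object toADT extends IO[IOProgram] {
  def read: IOProgram[String] = Single(IOProgram.Read)
  def write(msg: String): IOProgram[Unit] = Single(IOProgram.Write(msg))
}
```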

Pros GADTs:

  • More control: In general, ADTs allow for more control over our programs, due to the fact that we have the program represented as a value that we can inspect, modify, refactor, etc.
  • Reification: If you somehow need to pass your programs around, or read programs from a file, then you need to represent programs as values, and for that purpose ADTs come in very handy.
  • Modular interpreters: Arguably, we can write interpreters in a more modular fashion when working with GADTs, as, for instance, with the Eff monad.

Conclusion & next steps

We have seen how we can do purely functional programming in an object-oriented fashion using so-called functional APIs, i.e. using type classes instead of plain abstract interfaces. This little change allowed us to widen the range of interpretations that our OO APIs can handle, and to write programs in a purely declarative fashion. And, significantly, all of this was achieved while working in the realm of object-oriented programming! So this style of doing FP, which is also known as MTL or tagless final, and is related to object algebras, is more closely aligned with OO programmers, and doesn’t require knowledge of abstractions alien to the OO world, such as GADTs and natural transformations. But we’ve just scratched the surface, as this is a very large subject to tackle in one post. Some of the topics we may see in the future are:

  • Modular interpreters: How to seamlessly compose interpreters using Functional APIs is another large issue which is currently under investigation. A recent library that aims at this goal is mainecoon.
  • Church encodings: In the GADT approach, declarative functions return programs that will eventually be interpreted, but with Functional APIs, we don’t see any such program value. In our next posts, we will see how the Church encoding allows us to reconcile these two different ways of doing FP.

Lastly, let us recommend this presentation, where we talk about the issues covered in this post: All roads lead … to lambda world. Also, you can find the code of this post here.

See ya!

Posted in algebra, functional programming, Scala, Type Class | Leave a comment

From “Hello, world!” to “Hello, monad!” (part III/III)

In the first part of this series, we saw how we can write the business logic of our applications as pure functions that return programs written in a custom domain-specific language (DSL). We also showed in part II that no matter how complex our business logic is, we can always craft a DSL to express our intent. All this was illustrated using the “Fibonacci” example of purely functional programming, namely IO programs. We reproduce below the resulting design of the IO DSL and a sample IO program:

  // IO DSL

  sealed trait IOProgram[A]
  case class Single[A](e: IOProgram.Effect[A]) 
    extends IOProgram[A]
  case class Sequence[A, B](p1: IOProgram[A],
    p2: A => IOProgram[B]) extends IOProgram[B]
  case class Value[A](a: A) extends IOProgram[A]

  object IOProgram{
    sealed trait Effect[A]
    case class Write(s: String) extends Effect[Unit]
    case object Read extends Effect[String]
  }

  // Sample IO program

  def echo(): IOProgram[String] =
    Sequence(Single(Read), (msg: String) =>
      Sequence(Single(Write(msg)), (_: Unit) =>
        Value(msg)))

However, while this design is essentially correct from the point of view of the functional requirements of our little application, and from the point of view of illustrating the essence of functional programming, it has two major flaws concerning two important non-functional guarantees: readability and modularity. Let’s start with the first one!

Note: you can find the code for this post in this repo.

More sugar!

What’s the problem with the little echo function we came up with? Well, this function being pure has an essential advantage: it simply declares what has to be done, and the task of actually executing those programs in any way we want is delegated to another part of the application – the interpreter. Thus, we could run our echo() IO program using the println and readLine methods of the Console; or using an asynchronous library using Future values; or test it without the need of mocking libraries with the help of custom state transformers in a type-safe way. Great, great, great! But … who would ever want to write our pure functions using that syntax? We have to admit that the readability of our little program is poor … to say the least. Let’s fix it!

Smart constructors for atomic programs

We start by adding some lifting methods that allow us to use IO instructions as if they were programs already:

object IOProgram {
  object Syntax{
    def read(): IOProgram[String] = Single(Read)
    def write(msg: String): IOProgram[Unit] = Single(Write(msg))
  }
}

Smart constructors for complex programs

Next, let’s introduce some smart constructors for sequencing programs. We will name them flatMap and map, for reasons that will become clear very soon. As you can see in the following implementation, flatMap simply allows us to write sequential programs using an infix notation, and map allows us to write a special type of sequential program: one which runs some program, transforms its result using a given function, and then simply returns that transformed output.

sealed trait IOProgram[A]{
  def flatMap[B](f: A => IOProgram[B]): IOProgram[B] =
    Sequence(this, f)
  def map[B](f: A => B): IOProgram[B] =
    flatMap(f andThen Value.apply)
}

Using all these smart constructors we can already write our program in a more concise style:

import IOProgram.Syntax._

def echo: IOProgram[String] =
  read() flatMap { msg =>
    write(msg) map { _ => msg }
  }

Using for-comprehensions

We may agree that the above version using smart constructors represents an improvement, but, admittedly, it’s far from the conciseness and readability of the initial impure version:

def echo(): String = {
  val msg: String = readLine
  println(msg)
  msg
}

For one thing at least: if our program consists of a long sequence of multiple subprograms, we will be forced to write a long sequence of nested, indented flatMaps. But we can avoid this already, using so-called for-comprehensions! This is a Scala feature which parallels Haskell’s do notation and F#’s computation expressions. In all of these cases, the purpose is to be able to write sequential programs more easily. Our little example can now be written as follows:

import IOProgram.Syntax._

def echo(): IOProgram[String] = for{
  msg <- read()
  _ <- write(msg)
} yield msg

For-comprehensions are desugared by the Scala compiler into a sequence of flatMaps and a last map expression. So, the above program and the flatMap-based program written in the last section are essentially identical.

Hello, Monad!

Let’s now deal with the second of our problems: the one concerning modularity. What’s the problem with the little DSL for IO programs we came up with? Basically, the problem is that approximately half of this data type is not related to input-output at all. Indeed, if we were to write a different DSL for imperative programs dealing with file system effects (e.g. reading the content of some file, renaming it, etc.), we would end up rewriting almost half of its definition line by line:

sealed trait FileSystemProgram[A]
case class Single[A](e: FileSystemProgram.Effect[A]) 
  extends FileSystemProgram[A]
case class Sequence[A, B](p1: FileSystemProgram[A], 
  p2: A => FileSystemProgram[B]) extends FileSystemProgram[B]
case class Value[A](a: A) extends FileSystemProgram[A]

object FileSystemProgram{
  sealed abstract class Effect[_]
  case class ReadFile(path: String) extends Effect[String]
  case class DeleteFile(path: String) extends Effect[Unit]
  case class WriteFile(path: String, content: String) 
    extends Effect[Unit]
}

The only remarkable change is related to the kinds of effects we are dealing with now: file system effects instead of IO effects. The definition of the DSL itself simply varies in the reference to the new kind of effect. This amount of redundancy is a clear signal of a lack of modularity. What we need is a generic data type that accounts for the common imperative features of both DSLs. We can try it as follows:

sealed trait ImperativeProgram[Effect[_],A]{
  def flatMap[B](f: A => ImperativeProgram[Effect,B]) =
    Sequence(this, f)
  def map[B](f: A => B) =
    flatMap(f andThen Value.apply)
}
case class Single[Effect[_],A](e: Effect[A]) 
  extends ImperativeProgram[Effect,A]
case class Sequence[Effect[_],A, B](
  p1: ImperativeProgram[Effect,A],
  p2: A => ImperativeProgram[Effect,B]) 
  extends ImperativeProgram[Effect,B]
case class Value[Effect[_],A](a: A) 
  extends ImperativeProgram[Effect,A]

Note how the Single variant of the DSL now refers to a (type constructor) parameter Effect[_]. We can now reuse the ImperativeProgram generic DSL in a modular definition of our DSLs for IO and file system effects:

type IOProgram[A] = 
  ImperativeProgram[IOProgram.Effect, A]

type FileSystemProgram[A] = 
  ImperativeProgram[FileSystemProgram.Effect, A]

This ImperativeProgram generic DSL seems pretty powerful: indeed, it encodes the essence of imperative DSLs, and it is actually commonly known by a much more popular name: the Free monad! The definitions of Free that you will find in professional libraries such as cats, scalaz or eff are not quite the same as the one obtained in this post, which is quite inefficient both in time and space (not to mention further modularity problems when combining different types of effects); but the essence of free monads, namely being able to define imperative programs given any type of effect represented by some type constructor, is there. This substantially reduces the effort of defining an imperative DSL: first, the program definition collapses into a single type alias; second, we get the flatMap and map operators for free; and, similarly, although not shown in this post, we will also be able to simplify the definition of monadic interpreters (those that translate a given free program into a specific monadic data type, such as a state transformation, asynchronous computation, etc.), amongst many other goodies.
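Such a monadic interpreter can be sketched in a few lines: given a Monad instance for the target type constructor and a translation of individual effects (a natural transformation), we can fold any ImperativeProgram into the target. This is a sketch under our own naming assumptions (`run` and `~>` are names we chose; libraries call this foldMap):

```scala
trait Monad[P[_]] {
  def flatMap[A, B](pa: P[A])(f: A => P[B]): P[B]
  def pure[A](a: A): P[A]
}

// Natural transformation: translates each effect into the target monad
trait ~>[F[_], G[_]] { def apply[A](fa: F[A]): G[A] }

sealed trait ImperativeProgram[Effect[_], A]
case class Single[Effect[_], A](e: Effect[A])
  extends ImperativeProgram[Effect, A]
case class Sequence[Effect[_], A, B](
  p1: ImperativeProgram[Effect, A],
  p2: A => ImperativeProgram[Effect, B])
  extends ImperativeProgram[Effect, B]
case class Value[Effect[_], A](a: A)
  extends ImperativeProgram[Effect, A]

// Fold the program into the target monad P: effects go through the
// translation, Sequence becomes flatMap, and Value becomes pure.
def run[Effect[_], P[_], A](program: ImperativeProgram[Effect, A])(
    t: Effect ~> P)(implicit M: Monad[P]): P[A] = program match {
  case Single(e) => t(e)
  case Value(a)  => M.pure(a)
  case s: Sequence[Effect, x, A] =>
    M.flatMap(run(s.p1)(t))((a: x) => run(s.p2(a))(t))
}
```

Note how the interpreter is written once for every effect type and every target monad: only the natural transformation needs to be supplied per interpretation.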

Conclusion: modularity all the way down!

We may say that the essence of functional programming is modularity. Indeed, the defining feature of functional programming, namely pure functions, is an application of this design principle: they let us compose our application out of two kinds of modules: pure functions themselves, which declare what has to be done, and interpreters, which specify a particular way of doing it. In particular, interpreters may behave as translators, so that the resulting interpretations are programs written in a lower-level DSL that also need to be interpreted. Eventually, we will reach the “bare metal” and the interpreters will actually bring the effects into the real world (i.e. something will be written on the screen, a file will be read, a web service will be called, etc.).

But besides pure functions, functional programming is full of additional modularity techniques: parametric polymorphism, type classes, higher-order functions, lazy evaluation, datatype generics, etc. All these techniques, which were first conceived in the functional programming community, basically aim at allowing us to write programs with extra levels of modularity. We saw an example in this post: instead of defining the imperative DSLs for Input/Output and File System programs in a monolithic way, we were able to abstract away their differences and package their common part in a highly reusable definition: namely, the generic imperative DSL represented by the Free monad. How did we do that? Basically, by using parametric polymorphism (higher-kinded generics, in particular) and generalised algebraic data types (GADTs). But functional programming is so rich in abstractions and modularity techniques that we could even achieve a similar modular result using type classes instead of GADTs (in a style known as finally tagless). And this is actually what we will see in our next post. Stay tuned!

Posted in Embedded DSLs, functional programming | Tagged , , | 1 Comment