## Lens, State Is Your Father

In our last post, we introduced `IOCoalgebra`s as an alternative way of representing coalgebras from an algebraic viewpoint, where `Lens` was used as a guiding example. In fact, lens is an abstraction that belongs to the family of optics, a great source of fascinating machines. We encourage you to watch this nice introduction to optics, because we’ll show more optic examples under the `IOCoalgebra` perspective. While doing so, we’ll find out that this new representation lets us identify and clarify some connections between optics and `State`. Finally, those connections will be analyzed in a real-world setting, specifically in the state module from Monocle. Let’s not waste time, there is plenty of work to do!

(*) All the encodings associated with this post have been collected here, where the same sectioning structure is followed.

## Optics as Coalgebras

First of all, let’s recall the `IOCoalgebra` type constructor:

```
type IOCoalgebra[IOAlg[_[_]], Step[_, _], S] = IOAlg[Step[S, ?]]
```

As you can see, it receives three type arguments: the object algebra interface, the state-based action or step, and the state type. Once provided, coalgebras are defined as a state-based interpretation of the specified algebra. Take `Lens` as an example of IOCoalgebra:

```
trait LensAlg[A, P[_]] {
  def get: P[A]
  def set(a: A): P[Unit]
}

type IOLens[S, A] = IOCoalgebra[LensAlg[A, ?[_]], State, S]
```

(*) This is a simple `Lens`, as opposed to a polymorphic one. We will only consider simple optics for the rest of the article.

If we expand `IOLens[S, A]`, we get `LensAlg[A, State[S, ?]]`, which is a perfectly valid representation for lenses as demonstrated by the following isomorphism:

```
def lensIso[S, A] = new (Lens[S, A] <=> IOLens[S, A]) {

  def from: IOLens[S, A] => Lens[S, A] =
    ioln => Lens[S, A](ioln.get.eval)(a => ioln.set(a).exec)

  def to: Lens[S, A] => IOLens[S, A] = ln => new IOLens[S, A] {
    def get: State[S, A] = State.gets(ln.get)
    def set(a: A): State[S, Unit] = State.modify(ln.set(a))
  }
}
```
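To make the expanded form `LensAlg[A, State[S, ?]]` tangible, here is a minimal, self-contained sketch. We hand-roll a tiny `State` instead of importing scalaz’s, and the `Person`/`AgeLens` names are just our own illustration:

```scala
// A bare-bones State, just enough to press the buttons (scalaz's is richer)
final case class State[S, A](run: S => (S, A)) {
  def eval(s: S): A = run(s)._2
  def exec(s: S): S = run(s)._1
}

case class Person(name: String, age: Int)

// An IOLens[Person, Int] in its expanded form: LensAlg[Int, State[Person, ?]]
object AgeLens {
  def get: State[Person, Int] = State(p => (p, p.age))
  def set(a: Int): State[Person, Unit] = State(p => (p.copy(age = a), ()))
}

// Pressing the buttons: `get` observes the focus, `set` transforms the state
AgeLens.get.eval(Person("John", 30))      // 30
AgeLens.set(31).exec(Person("John", 30))  // Person("John", 31)
```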

We’ll see more details about lenses in later sections but, for now, let’s keep diving through other optics, starting with `Optional`:

```
trait OptionalAlg[A, P[_]] {
  def getOption: P[Option[A]]
  def set(a: A): P[Unit]
}

type IOOptional[S, A] = IOCoalgebra[OptionalAlg[A, ?[_]], State, S]
```
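To see `IOOptional` at work, here is a self-contained sketch (home-grown `State`, and our own `Person`/`nickname` example; whether `set` should write when the focus is absent is a design choice, and here it only updates an existing value):

```scala
final case class State[S, A](run: S => (S, A)) {
  def eval(s: S): A = run(s)._2
  def exec(s: S): S = run(s)._1
}

case class Person(name: String, nickname: Option[String])

// An IOOptional[Person, String]: the focus may be missing, hence Option[String]
object NicknameOptional {
  def getOption: State[Person, Option[String]] = State(p => (p, p.nickname))
  def set(a: String): State[Person, Unit] =
    State(p => (p.copy(nickname = p.nickname.map(_ => a)), ()))
}

NicknameOptional.getOption.eval(Person("John", None))           // None
NicknameOptional.set("Johnny").exec(Person("John", Some("J")))  // Person("John", Some("Johnny"))
```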

This optic just replaces IOLens’ `get` with `getOption`, stating that it’s not always possible to return the inner value, hence the resulting `Option[A]`. Beyond that, there are no significant changes, given that `State` is used as the step as well. So, we can move on to `Setter`s:

```
trait SetterAlg[A, P[_]] {
  def modify(f: A => A): P[Unit]
}

type IOSetter[S, A] = IOCoalgebra[SetterAlg[A, ?[_]], State, S]
```
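As a quick illustration of how `set` falls out of `modify`, here is a sketch (with a tiny home-grown `State`; the `Person`/`AgeSetter` names are ours):

```scala
final case class State[S, A](run: S => (S, A)) {
  def exec(s: S): S = run(s)._1
}

trait SetterAlg[A, P[_]] {
  def modify(f: A => A): P[Unit]
  // derived: overwriting is just modifying while ignoring the old focus
  def set(a: A): P[Unit] = modify(_ => a)
}

case class Person(name: String, age: Int)
type PersonStep[X] = State[Person, X]

object AgeSetter extends SetterAlg[Int, PersonStep] {
  def modify(f: Int => Int): PersonStep[Unit] =
    State(p => (p.copy(age = f(p.age)), ()))
}

AgeSetter.modify(_ + 1).exec(Person("John", 30))  // Person("John", 31)
AgeSetter.set(40).exec(Person("John", 30))        // Person("John", 40)
```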

In fact, this is a kind of relaxed lens that has lost the ability to “get” the focus, but is still able to update it. Notice that `set` can be automatically derived in terms of `modify`. Again, `State` is perfectly fine to model the step associated with this optic. Finally, there is `Getter`:

```
trait GetterAlg[A, P[_]] {
  def get: P[A]
}

type IOGetter[S, A] = IOCoalgebra[GetterAlg[A, ?[_]], Reader, S]
```

This new optic is pretty much like a lens where the `set` method has been taken off, so only `get` remains. Although we could use `State` to represent the state-based action, we’ll take another path here. Since there isn’t a real state in the background that we need to thread, i.e. we can only “get” the inner value, `Reader` can be used as the step instead. As an additional observation, notice that `LensAlg` could have been implemented as a combination of `GetterAlg` and `SetterAlg`.
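A `Reader`-based getter can be sketched as follows (we define a one-field `Reader` wrapper ourselves to keep the snippet self-contained; the names are ours):

```scala
// Reader is just a computation that reads from an environment S
final case class Reader[S, A](run: S => A)

trait GetterAlg[A, P[_]] {
  def get: P[A]
}

case class Person(name: String, age: Int)
type PersonReader[X] = Reader[Person, X]

// An IOGetter[Person, String]: pure observation, no state threading needed
object NameGetter extends GetterAlg[String, PersonReader] {
  def get: Reader[Person, String] = Reader(_.name)
}

NameGetter.get.run(Person("John", 30))  // "John"
```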

There are still more optics in the wild, such as `Fold` and `Traversal`, but we’re currently working on their corresponding IOCoalgebra representation. However, the ones that have already been shown are good enough to establish some relations between optics and the state monad.

## Optics and State Connections

Dealing with lenses and dealing with state feel like very similar activities. In both settings there is a state that can be queried and updated. However, if we want to go deeper with this connection, we need to compare apples to apples. So, what’s the algebra for `State`? Indeed, this algebra is very well known: it’s called `MonadState`:

```
trait MonadState[F[_], S] extends Monad[F] {
  def get: F[S]
  def put(s: S): F[Unit]

  def gets[A](f: S => A): F[A] =
    map(get)(f)

  def modify(f: S => S): F[Unit] =
    bind(get)(f andThen put)
}
```

This `MonadState` version is a simplification of what we may find in a library such as scalaz or cats. The algebra is parametrized with two types: the state-based action `F` and the state `S` itself. If we look inside the typeclass, we find two abstract methods: `get` to obtain the current state and `put` to overwrite it, given a new one passed as argument. Those abstract methods, in combination with the fact that `MonadState` inherits `Monad`, let us implement `gets` and `modify` as derived methods. This sounds familiar, doesn’t it? It’s just the lens algebra along with the program examples that we used in our last post! Putting it all together:

```
trait LensAlg[A, P[_]] {
  def get: P[A]
  def set(a: A): P[Unit]

  def gets[B](
      f: A => B)(implicit
      F: Functor[P]): P[B] =
    get map f

  def modify(
      f: A => A)(implicit
      M: Monad[P]): P[Unit] =
    get >>= (f andThen set)
}
```

(*) Notice that we could have had `LensAlg` extending `Monad` as well, but this decoupling seems nicer to us, since each program requires only the exact level of power to proceed. For instance, `Functor` is powerful enough to implement `gets`, so no `Monad` evidence is needed.

Apparently, the only difference between `LensAlg` and `MonadState` lies in the way we use the additional type parameter. On the one hand, `LensAlg` has a type parameter `A`, which we understand as the focus or inner state contextualized within an outer state. On the other hand, we tend to think of `MonadState`’s `S` parameter as the unique global state where focus is put. Thereby, types instantiating this typeclass usually make reference to that type parameter, as can be appreciated in the `State` instance for `MonadState`. However, we could avoid that common practice and use a different type as companion. In fact, by applying this idea to the previous instance, we get a new lens representation:

```
type MSLens[S, A] = MonadState[State[S, ?], A]
```

(*) The isomorphism between `IOLens` and `MSLens` is almost trivial, given the similarities among their algebras. Indeed, you can check it here.

Lastly, we can’t forget about one of the most essential elements of an algebra: its laws. `MonadState` laws are fairly well known in the functional programming community. However, the laws associated with our `LensAlg` aren’t so clear. Luckily, we don’t have to start this work from scratch, since the lens laws are a good starting point. Despite the similarity between both sets of laws (look at their names!), we still have to formalize this connection. Probably, this task will shed even more light on this section.
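As a hint of what those laws might look like, the classical lens laws (GetSet, SetGet, SetSet) translate directly into the `IOLens` vocabulary. The following sketch (our own phrasing, with a home-grown `State` and a concrete lens) checks them on sample values:

```scala
final case class State[S, A](run: S => (S, A)) {
  def eval(s: S): A = run(s)._2
  def exec(s: S): S = run(s)._1
}

case class Person(name: String, age: Int)

object AgeLens {
  def get: State[Person, Int] = State(p => (p, p.age))
  def set(a: Int): State[Person, Unit] = State(p => (p.copy(age = a), ()))
}

// GetSet: setting back what you just got is a no-op
def getSet(s: Person): Boolean =
  AgeLens.set(AgeLens.get.eval(s)).exec(s) == s

// SetGet: you get what you set
def setGet(s: Person, a: Int): Boolean =
  AgeLens.get.eval(AgeLens.set(a).exec(s)) == a

// SetSet: the second set wins
def setSet(s: Person, a1: Int, a2: Int): Boolean =
  AgeLens.set(a2).exec(AgeLens.set(a1).exec(s)) == AgeLens.set(a2).exec(s)
```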

## Monocle and State

Connections between optics and state have already been identified. Proof of this can be found in Monocle, the most popular Scala optics library nowadays, which includes a state module containing facilities to combine some optics with `State`. What follows is a simplification (removing the polymorphic stuff) of the class that provides conversions from lens actions to state ones:

```
class StateLensOps[S, A](lens: Lens[S, A]) {
  def toState: State[S, A] = ...
  def mod(f: A => A): State[S, A] = ...
  def assign(a: A): State[S, A] = ...
  ...
}
```
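A plausible implementation of `mod` (a sketch of ours; Monocle’s actual code differs in the details) just applies `lens.modify` and then reads the focus out of the updated state:

```scala
final case class State[S, A](run: S => (S, A))

// a minimal concrete Lens, standing in for Monocle's
final case class Lens[S, A](get: S => A, set: A => S => S) {
  def modify(f: A => A): S => S = s => set(f(get(s)))(s)
}

class StateLensOps[S, A](lens: Lens[S, A]) {
  // run the update, then return the new focus alongside the new state
  def mod(f: A => A): State[S, A] =
    State { s =>
      val s2 = lens.modify(f)(s)
      (s2, lens.get(s2))
    }
}
```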

For instance, `mod` is a shortcut for applying `lens.modify` over the standing outer state and returning the resulting inner value. The next snippet, extracted from Monocle (type annotations were added for clarity), shows this method in action:

```
case class Person(name: String, age: Int)
val _age: Lens[Person, Int] = GenLens[Person](_.age)
val p: Person = Person("John", 30)

test("mod") {
  val increment: State[Person, Int] = _age mod (_ + 1)

  increment.run(p) shouldEqual ((Person("John", 31), 31))
}
```

That said, how can we exploit our optic representation to analyze this module? Well, first of all, it would be nice to carry out the same exercise from the `IOLens` perspective:

```
case class Person(name: String, age: Int)
val _ioage: IOLens[Person, Int] =
  IOLens(_.age)(age => _.copy(age = age))
val p: Person = Person("John", 30)

test("mod") {
  val increment: State[Person, Int] =
    (_ioage modify (_ + 1)) >> (_ioage get)

  increment.run(p) shouldEqual ((Person("John", 31), 31))
}
```

Leaving aside the different types returned by `_age mod (_ + 1)` and `_ioage modify (_ + 1)`, we could say that both instructions are pretty much the same. However, `mod` is an action located in an external state module, while `modify` is just a primitive belonging to `IOLens`. Is this a mere coincidence? To answer this question, we have formalized these connections in a table:

| Monocle State-Lens Action | IOLens Action | Return Type |
|---|---|---|
| `toState` | `get` | `State[S, A]` |
| ? | `set(a: A)` | `State[S, Unit]` |
| ? | `gets(f: A ⇒ B)` | `State[S, B]` |
| ? | `modify(f: A ⇒ A)` | `State[S, Unit]` |
| `mod(f: A ⇒ A)` | ? | `State[S, A]` |
| `modo(f: A ⇒ A)` | ? | `State[S, A]` |
| `assign(a: A)` | ? | `State[S, A]` |
| `assigno(a: A)` | ? | `State[S, A]` |

What this table tells us is how the actions correspond to each other. For instance, the first row shows that `toState` (from Monocle) corresponds directly with `get` (from our `IOLens`), both generating a program whose type is `State[S, A]`. The second row contains a new element, ?, which tells us that there’s no corresponding action for `set` in Monocle. Given the multitude of gaps in the table, we might conclude that we’re dealing with quite different things, but if you squint your eyes, it’s not hard to see that `mod(o)` and `assign(o)` are very close to `modify` and `set`, respectively. In fact, as we saw while defining `increment`, `mod` is just a combination of `get` and `modify`. So, there seems to be a strong connection between the `IOLens` primitives and the actions that could be placed in the state module for lenses. The obvious question to ask now is: is there such a connection between the state module and other optics? In fact, Monocle also provides facilities to combine `State` and `Optional`s, so we can create the same table for it:

| Monocle State-Optional Action | IOOptional Action | Return Type |
|---|---|---|
| `toState` | `getOption` | `State[S, Option[A]]` |
| ? | `set(a: A)` | `State[S, Unit]` |
| ? | `gets(f: A ⇒ B)` | `State[S, Option[B]]` |
| ? | `modify(f: A ⇒ A)` | `State[S, Unit]` |
| `modo(f: A ⇒ A)` | ? | `State[S, Option[A]]` |
| `assigno(a: A)` | ? | `State[S, Option[A]]` |

Again, the results are very similar to the ones we extracted for `IOLens`. In fact, we claim that any `IOCoalgebra`-based optic which can be interpreted into `State` may have a representative in the state module, and the actions that the module may include for each of them are just its associated primitives and derived methods. But what about `Getter`s, where both `State` and `Reader` are suitable instances? Well, the `State` part is clear: we can add a new representative for `Getter` in the state module. However, the interesting insight comes with `Reader`: identifying new interpretations means identifying new modules. In this sense, we could consider including a new reader module in the library. Obviously, we could populate that module by following the same ideas that we showed for state.

To sum up, by following this approach, we have obtained a framework to systematically determine:

• The appropriateness of including a new module.
• The optics that it may support.
• The methods it may contain for every optic.

This is a nice help, isn’t it?

## Discussion and Ongoing Work

Today, we have seen that `IOCoalgebra`s serve two purposes, both of them involving understandability. First of all, we have identified an unexpected connection between `Lens` and the `State` monad. In fact, we have defined `Lens` in terms of `MonadState`, so we had to explain to `Lens` who her biological father was, and that was tough for her! Secondly, we have described a systematic process to create and populate Monocle’s peripheral modules, such as state. In this sense, if we go one step further, we could think of those peripheral modules as particular interpretations of our optic algebras. This perspective makes the aforementioned process entirely dispensable, since optic instances would replace the module itself. As a result, logic wouldn’t end up being contaminated with new names such as `assign` or `mod`, when all they really mean is `set` and `modify`, respectively.

As we mentioned before, we still have to translate other optics into their corresponding `IOCoalgebra` representation and identify the laws associated with the algebras. Besides, we focused on simple optics, but we should contemplate the polymorphic nature of optics to analyze its implications in the global picture. Anyway, optics are just a source of very low-level machines that constitute one of the first steps in the pursuit of our general objective, which is programming larger machines, i.e. reactive systems, by combining smaller ones. It’s precisely within this context where our optics, in combination with many other machines from here and there, should shine. In this sense, there’s still a lot of work to do, but at least we could see that isolating algebras from state concerns has turned out to be a nice design pattern.


## Yo Dawg, We Put an Algebra in Your Coalgebra

As Dan Piponi suggested in Cofree Meets Free, we may think of coalgebraic things as machines with buttons. In this post, we take this metaphor seriously and show how we can use algebras to model the Input/Output interface of the machine, i.e. its buttons. Prior to that, we’ll make a brief introduction on coalgebras as they are usually shown, namely as F-coalgebras.

## What are F-coalgebras?

An F-coalgebra (or functor coalgebra) is just a reversed version of the more popular concept of F-algebra, both of them belonging to the mystical world of Category Theory. The most widespread representation of an F-algebra is

```
type Algebra[F[_], X] = F[X] => X
```

(using Scala here) Paraphrasing Bartosz Milewski, “It always amazes me how much you can do with so little”. I believe that its dual counterpart

```
type Coalgebra[F[_], X] = X => F[X]
```

deserves the very same amazingness, so today we’ll put focus on them.

Given the previous representation, we notice that F-coalgebras are composed of a carrier `X`, a functor `F[_]` and the structure `X => F[X]` itself. What can we do with such a thing? Since we are just software developer muggles (vs. mathemagicians), we need familiar abstractions to deal with coalgebras. Therefore, we like to think of them as machines with buttons, which know how to forward a particular state (maybe requiring some input) to the next one (maybe attaching some output along the way) by pressing the aforementioned buttons. Now, let’s find out some examples of mainstream machines that we, as functional programmers, already know:

```
// Generator Machine (Streams)
type GeneratorF[A, S] = (A, S)
type Generator[A, S]  = Coalgebra[GeneratorF[A, ?], S]

// Mealy Automata Machine
type AutomataF[I, S] = I => (Boolean, S)
type Automata[I, S]  = Coalgebra[AutomataF[I, ?], S]

// Lens Machine
type LensF[A, S] = (A, A => S)
type Lens[A, S]  = Coalgebra[LensF[A, ?], S]
```

Firstly, let’s expand `Generator[A, S]` into `S => (A, S)`, which is something easier to deal with. Indeed, it’s just a function that, given an initial state `S`, returns both the head `A` and the tail `S` associated with that original state. It’s the simplest specification of a generator machine that one could find! Given a concrete specification, and once provided an initial state, we could build a standard `Stream` of `A`s.
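For instance, the natural numbers make a one-line generator, and a finite prefix can be unfolded by pressing the machine repeatedly (a self-contained sketch; `take` is our own helper, and we fix `A = Int` through a type alias to avoid kind-projector syntax):

```scala
type Coalgebra[F[_], X] = X => F[X]
type GeneratorF[A, S] = (A, S)
type IntGenF[S] = GeneratorF[Int, S]  // fix A = Int, leaving the state hole open

// each state is both the head and the seed of its successor
val nats: Coalgebra[IntGenF, Int] = s => (s, s + 1)

// unfold a finite prefix of the generated stream
def take(n: Int, seed: Int): List[Int] =
  if (n <= 0) Nil
  else {
    val (head, next) = nats(seed)
    head :: take(n - 1, next)
  }

take(5, 0)  // List(0, 1, 2, 3, 4)
```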

Secondly, we showed a Mealy `Automata`. Again, let’s turn `Automata[I, S]` into `S => I => (Boolean, S)` to see it more clearly: given the current state `S` and any input `I`, we can determine both the finality `Boolean` condition and the new state `S`.
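As a tiny concrete automaton, consider one that accepts bit strings containing an even number of ones (a sketch of ours; the `Boolean` output doubles as the finality condition, and `accepts` is our own driver):

```scala
type Coalgebra[F[_], X] = X => F[X]
type AutomataF[I, S] = I => (Boolean, S)
type BitAutF[S] = AutomataF[Int, S]  // fix I = Int to avoid kind-projector syntax

// state: "have we seen an even number of ones so far?"
val evenOnes: Coalgebra[BitAutF, Boolean] = even => bit => {
  val next = if (bit == 1) !even else even
  (next, next)  // final exactly when the parity is even
}

// drive the machine over a whole input, starting from the empty-string state
def accepts(bits: List[Int]): Boolean =
  bits.foldLeft((true, true)) { case ((_, s), b) => evenOnes(s)(b) }._1

accepts(List(1, 0, 1))  // true
accepts(List(1, 0))     // false
```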

Finally, we saw `Lens`. Notice that the type parameters are reversed if we compare this lens with the “official” representation (e.g. lens, Monocle, etc.). This is just to provide homogeneity with the rest of the machines, where the state `S` is kept as the last parameter. As usual, let’s expand `Lens[A, S]` to obtain `S => (A, A => S)`. This tells us that given an initial state `S`, we can either get the smaller piece `A` or set the whole state with a brand new `A`.
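Here is the expanded lens shape in the flesh (a self-contained sketch with our own `Person` example; again, a type alias stands in for kind-projector):

```scala
type Coalgebra[F[_], X] = X => F[X]
type LensF[A, S] = (A, A => S)
type AgeLensF[S] = LensF[Int, S]  // fix A = Int

case class Person(name: String, age: Int)

// Lens[Int, Person] expanded: Person => (Int, Int => Person)
val ageLens: Coalgebra[AgeLensF, Person] =
  p => (p.age, a => p.copy(age = a))

val (age, setAge) = ageLens(Person("John", 30))
age          // 30
setAge(31)   // Person("John", 31)
```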

So far, we have seen the typical representation for some prominent coalgebras. On the other hand, we claimed that we like to think of those coalgebras as machines with buttons that let us make them work. That machine abstraction seems nice, but I agree it’s difficult to see those buttons right now. So, let’s find them!

## Coalgebras as machines? Then, show me the buttons!

As promised, we’ll dive into F-coalgebras to find some buttons. I anticipate that those buttons are kind of special, since they could require some input in order to be pressed and they could return some output after that action. We’re going to use `Lens` as a guiding example but we’ll show the final derivation for our three machines at the end as well. So, we start from this representation:

```
type Lens[A, S] = S => (A, (A => S))
```

If we apply basic math, we can split this representation into a tuple, getting an isomorphic one:

```
type Lens[A, S] = (S => A, S => A => S)
```

Trust me when I say that every element in this tuple corresponds with an input-output button, but we still have to make them uniform. First of all, we’re going to flip the function at the second position, so the input for that button stays in the left hand side:

```
type Lens[A, S] = (S => A, A => S => S)
```

Our button at the first position has no input, but we can create an artificial one to make the input slot uniform:

```
type Lens[A, S] = (Unit => S => A, A => S => S)
```

Once the inputs for the buttons are provided, we reach different situations. On the first button there is `S => A`, which is a kind of observation where the state remains as is. However, on the second button there is `S => S`, which is clearly a state transformation with no output attached to it. If we return the original state along with the observed output in the first button, and provide an artificial output for the second one, we get our uniform buttons, each with an input, an output and the resulting state.

```
type Lens[A, S] = (Unit => S => (S, A), A => S => (S, Unit))
```

If we squint a bit, we can find a good old friend hidden in the right hand side of our buttons, the `State` monad, leading us to a new representation where both tuple elements are Kleisli arrows:

```
type Lens[A, S] = (Unit => State[S, A], A => State[S, Unit])
```

Finally, we can take one last step, aiming both at naming the buttons and at getting closer to an object-oriented mindset:

```
trait Lens[A, S] {
  def get(): State[S, A]
  def set(a: A): State[S, Unit]
}
```

So here we are! We have turned an F-coalgebra into a trait that represents a machine where the buttons (get & set) are clearly identified. Obviously, pressing a button is synonymous with invoking a method belonging to that machine. The returned value represents the state transformation that we must apply over the current state to make it advance. If we apply the same derivation to streams and automata, we get similar representations:

```
trait Generator[A, S] {
  def head(): State[S, A]
  def tail(): State[S, Unit]
}

trait Automata[I, S] {
  def next(i: I): State[S, Boolean]
}
```

We’re glad we found our buttons, so we can reinforce the machine intuition, but stranger things have happened along the way… The coalgebraic Upside Down world is not quite far from the algebraic one.

## Buttons are Algebras

In the previous section we made a derivation from the Lens F-coalgebra to a trait Lens where buttons are made explicit. However, that representation was mixing state and input-output concerns. If we go a step further, we can decouple both aspects by abstracting the state away from the specification, to obtain:

```
trait LensAlg[A, P[_]] {
  def get(): P[A]
  def set(a: A): P[Unit]
}

type Lens[A, S] = LensAlg[A, State[S, ?]]
```

So, lenses can be understood as a state-based interpretation of a particular Input/Output algebra. We can distinguish two components in this kind of specification: the IO interface and the state transition component. Why would we want to define our lenses, or any other coalgebra, in this way? One advantage is that, once we get this representation, where input-output buttons are completely isolated, we can write machine programs that are completely decoupled from the state component, and depend only on the input-output interface. Take `modify`, a standard lens method, as an example:

```
def modify[A, P[_]](
    f: A => A)(implicit
    P: LensAlg[A, P],
    M: Monad[P]): P[Unit] =
  P.get >>= (P.set compose f)
```

Notice that although `modify` constrains `P` to be monadic, this restriction could be different in other scenarios, as we can see with `gets`, where `Functor` is powerful enough to fulfil the programmer needs:

```
def gets[A, B, P[_]](
    f: A => B)(implicit
    P: LensAlg[A, P],
    F: Functor[P]): P[B] =
  P.get map f
```

These programs are absolutely declarative since nothing has been said about `P[_]` yet, except for the fundamental constraints. Indeed, this way of programming should be pretty familiar for a functional programmer: the step that abstracted the state away led us to a (Higher Kinded) object-algebra interface, which is just an alternative way of representing algebras (as F-algebras are).

## Ongoing Work

We started this post talking about F-coalgebras, `type Coalgebra[F[_], X] = X => F[X]`, and then we turned our lens coalgebra example into a new representation where buttons and state transformation concerns are clearly identified (rather than being hidden in the functor `F`). Indeed, we may tentatively put forward IO-coalgebras as a particular class of coalgebras, and define lenses as follows:

```
type IOCoalgebra[IOAlg[_[_]], Step[_, _], S] = IOAlg[Step[S, ?]]
type Lens[A, S] = IOCoalgebra[LensAlg[A, ?[_]], State, S]
```

As we said in the previous section, this representation empowers us to use the existing algebraic knowledge to deal with coalgebras. So, although we started our journey aiming at the specification of machines, we were brought back to the algebraic world! So, what is the connection between both worlds? In principle, what we suggest is that coalgebras might be viewed as state-based interpretations of algebras. Now, whether any F-coalgebra can be represented as an IO-coalgebra is something that has to be shown. Additionally, we should also identify the constraints in the `IOCoalgebra` definition that allow us to prove that the resulting formula is actually a coalgebra.

On future posts, we’ll be talking about cofree coalgebras as universal machines. As we will see, those cofree machines exploit the button intuition to simulate any other machine in different contexts. By now, we’d be really grateful to receive any kind of feedback to discuss the proposed connection between languages and machines. Hope you enjoyed reading!


## From “Hello, world!” to “Hello, monad!” (Part I)

This is the first instalment of a series of posts about the essence of functional programming. The only purpose of this series is to illustrate the defining features of this style of programming using different examples of increasing complexity. We will start with the ubiquitous “Hello, world!” and will eventually arrive at … (throat clearing) monads. But we won’t argue that monads are the essence of functional programming, and, ultimately, do not intend these posts to be about monads. In fact, we will stumble upon monads without actually looking for them, much in the same spirit of Dan Piponi’s “You Could Have Invented Monads! (And Maybe You Already Have.)“.

There is a major difference between Dan’s post and these ones, however: we won’t be using Haskell but Scala, a language which unlike Haskell is not purely functional, i.e. that allows us to write non-functional programs. But this feature won’t be a drawback at all. On the contrary, we think that it will allow us to emphasise some aspects (e.g. the interpreters) that may go unnoticed using a language like Haskell. Let’s start with our first example!

## Hello, functional world!

This is a possible way of writing the “Hello, world!” program in Scala:

```
object Example1{
  def hello(): Unit =
    println("Hello, world!")
}
```

which can be run in the Scala REPL as follows:

```
scala> Example1.hello()
Hello, world!
```

As you can see, when this function is run the message “Hello, world!” is printed in the console. This is called a side effect, and it was indeed the purpose of the program. But this also means that our implementation is not purely functional. Why? Because functional programming is all about writing pure functions: functions which receive some input, compute some values and do nothing else. In a strongly-typed language such as Scala, we can witness the non-functional character of some function as follows: if the function does nothing else than returning values of the type declared in its signature, then it’s a pure function; otherwise, it’s an impure function: its signature declares that it does one thing, but it also does something more behind the compiler’s back. We may thus also say that impure functions work in the black market, beyond the reach of the compiler’s type system (our best ally!).
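To see the difference in code, compare a pure function with one that smuggles an effect past an identical signature (a toy example of ours):

```scala
// Pure: everything it does is visible in the type Int => Int
def addOne(x: Int): Int = x + 1

// Impure: same signature, but the println happens behind the compiler's back
def addOneNoisy(x: Int): Int = {
  println(s"adding one to $x")  // side effect not reflected in the type
  x + 1
}
```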

But, if pure functions only allow us to return values, how can we then print something to the console? How can we then execute any kind of effect (read something from the database, invoke a web service, write to a file, etc.)? How can we then do something useful at all? The answer is that you can’t do those things with pure functions alone. Functional programming rests upon a basic modularity principle which tells us to decompose our applications into two kinds of modules: (1) functional modules, made up of pure functions that are responsible for computing what has to be done, and (2) non-functional modules made up of impure functions or programs in charge of actually doing it. It’s this latter kind of modules which will do the dirty job of actually interacting with the outside world and executing the desired effects; and it’s the only responsibility of the functional modules to determine which effects have to be executed. When programming functionally, you should not forget this fact: impure functions will eventually be necessary. The only thing that functional programming mandates is to segregate impure functions, so that the ivory tower where pure functions live is as large as possible, and the kitchen where impure, side-effecting functions operate is reduced to the bare minimum (which, nonetheless, in many applications might be quite large).

This limitation on the things that functional programming can do is a self-imposed constraint that doesn’t actually constrain its range of application domains. Indeed, we can apply the functional programming style to any kind of application domain you may think of. So, what about our impure “Hello, world!” program? Can we purify it? Sure we can. But then, how can we disentangle its pure and impure parts? Essentially, functional programming tells us to proceed as follows (line numbers refer to the code snippet below):

• First, we define a data type that allows us to describe the kinds of effects we are dealing with. In our “Hello, world!” example, we want to talk about printing strings somewhere, so we will define the `Print` data type (cf. line 8).
• Second, we implement a function which computes the particular desired effects in terms of an instance of the previous data type. Our program then will be pretty simple (line 12): it simply returns the value `Print("Hello, world!")`. Note that this function is pure!
• Third, we implement a function that receives an instance of the effect data type, and executes it in any way we want. In our example, we will implement a function that receives a `Print` value and executes a `println` instruction to write the desired message to the console (line 17). This function is thus impure!
• Last, we compose our program out of the previous two functions, obtaining, of course, an impure program (line 21).

The resulting program is implemented in the `Fun` module (pun intended):

```
object Example1{

  /* Functional purification */
  object Fun{

    // Language
    type IOProgram = Print
    case class Print(msg: String)

    // Program
    def pureHello(): IOProgram =
      Print("Hello, world!")

    // Interpreter
    def run(program: IOProgram): Unit =
      program match {
        case Print(msg) => println(msg)
      }

    // Composition
    def hello() = run(pureHello())
  }
}
```

The equivalent program to our initial impure solution `Example1.hello` is the (also impure) function `Example1.Fun.hello`. Both functions have the same functionality, i.e. they both do the same thing, and from the point of view of the functional requirements of our tiny application, they are both correct. However, they are far from being similar in terms of their reusability, composability, testability, and other non-functional requirements. We won’t explain in this post why the functional solution offers better non-functional guarantees, but ultimately the reason lies behind its better modularisation: whereas the `Example1.hello` function is monolithic, its functional counterpart is made up of two parts: the `pureHello` function and the impure function `run`.

Now, an important remark concerning the above code: note that we defined the alias `IOProgram` for our effect data type, and that we used the labels Language, Program and Interpreter for the different parts of our functional solution. This is not accidental, and it points at the interpreter pattern, arguably the essence of functional programming:

• First, our effect data type can be regarded as the language we use to describe the desired effects of our application. As part of this language, we can have different types of single effects, such as the `Print` effect or instruction. Also, since languages are used to write programs, expressions in our effect language can be called programs, and we can use the word “program” to name the type of the overall effect language. In our case, since we are dealing with IO effects, `IOProgram` is a good name. Last, note that the purpose of our language is very specific: we want to be able to write programs that just build upon IO instructions, so `IOProgram` is actually a domain-specific language (DSL). Here are some hand-crafted programs of our IO DSL:
```
scala> import Example1.Fun._
import Example1.Fun._

scala> val program1: IOProgram = Print("hi!")
program1: Example1.Fun.IOProgram = Print(hi!)

scala> val program2: IOProgram = Print("dummy program!")
program2: Example1.Fun.IOProgram = Print(dummy program!)
```
• So, pure functions return programs: this means that functional programming is intimately related to metaprogramming! And when we say that functional programming is declarative, we mean that functional programs just declare or describe what has to be done in terms of expressions or values. To convince yourself that our `pureHello` function is really declarative (i.e. pure), just execute it on the REPL. You will see that the only thing that happens during its execution is that a new value is computed by the runtime system (note that the output that you’ll see in the REPL is not a result of the `pureHello` execution, but of the REPL itself):
```
scala> Example1.Fun.pureHello()
res1: Example1.Fun.IOProgram = Print(Hello, world!)
```
• Once we execute a pure function and obtain a program, what is left is to actually run that program. But our program is an expression, pure syntax, so we have to choose a particular interpretation before we can actually run it. In our case, we chose to interpret our `IOProgram`s in terms of console instructions, but we are free to interpret them otherwise (file IO, sockets, etc.). When you run the program computed by our `pureHello` function, you will actually see the intended side effects:
```
scala> Example1.Fun.run(Example1.Fun.pureHello())
Hello, world!
```
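
Putting the pieces together, here is a minimal, self-contained sketch of the DSL and two interpreters. The single `Print` constructor is inferred from the REPL sessions above (the full `IOProgram` definition appeared earlier in the post and may be richer), and the pure `show` interpreter is an illustrative addition, not part of the post's code:

```scala
// Reduced sketch of the IO DSL: one effect type, assumed from the
// programs shown above (Print(...) only)
sealed trait IOProgram
case class Print(msg: String) extends IOProgram

// A pure function: it returns a program (a value), it does not print
def pureHello(): IOProgram = Print("Hello, world!")

// Console interpreter: runs the program with real side effects
def run(program: IOProgram): Unit = program match {
  case Print(msg) => println(msg)
}

// Alternative, pure interpreter (hypothetical name `show`): renders the
// program as a String, showing that the same syntax admits different
// interpretations
def show(program: IOProgram): String = program match {
  case Print(msg) => msg
}
```

The same `IOProgram` value can thus be fed to `run` for console output or to `show` for a pure rendering, which is the freedom of interpretation the text describes.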

So, this is basically the structure of functional programming-based applications: domain-specific languages, pure functions that return programs in those DSLs, and interpreters that execute those programs. Note that these interpreters may be implemented in such a way that our programs are not directly executed but are instead translated to programs of some lower-level intermediate language (in a pure way). Eventually, however, we will reach the “bare metal” and we will be able to observe some side effect in the real world.

The IO programs that we are able to express with our current definition of the type `IOProgram` are very simple. In fact, we can just create programs that write single messages. Accordingly, the range of impure functions that we can purify is pretty limited. In our next posts, we’ll challenge our IO DSL with more complex and realistic scenarios, and will see how it has to be extended in order to cope with them.

Edit: All code from this post can be found here.

Posted in functional programming, Scala

## The Speech Console

During these months, we have tried to explain Speech using different strategies and metaphors, with varying results. For instance, we have defined Speech as

“A process-oriented programming language”

or as

“A DSL for programming the business logic of social applications”

These definitions are precise and correct, but not very effective. In fact, the common reaction from the audience to these definitions is “WAT” 😉

Juan: In this post, I will try to shed some light on these definitions with the help of … THE SPEECH CONSOLE!

You: wat

Well, let’s try it.

## What is the Speech Console?

Think of chat clients, such as IRC clients, instant messengers, etc. These tools allow people to communicate about any topic, almost in real-time. The Speech console can be thought of as a chat client, in the sense that its purpose is essentially to allow users to communicate and engage in conversation. And, consequently, you can similarly think of the Speech virtual machine as a kind of chat server which enables and mediates these communications. In fact, if we launch the Speech server with the minimum configuration possible, what we obtain is something similar to a chat server. The following program, which you can find in the Getting started section of the speechlang.org site, does precisely that:

```
object Main extends App {
  object Test extends org.hablapps.speech.web.PlainSystem
  Test.launch
}
```

Running this program will make the Speech server available at the localhost:8111 default address. Then, you can point the browser to the URL localhost:8111/console to get access to the Speech console. The following video illustrates this process and a sample session with two users.

## The Speech Server as a structured communication infrastructure

But the Speech Virtual Machine is much more than a simple chat server, of course. Even with its minimal configuration, the Speech server allows users to structure their communications around a hierarchy of interaction contexts (akin to chat rooms), and their activity in terms of a hierarchy of roles played within those interactions. Thus, besides saying arbitrary things, the previous video showed how users can set up new interaction contexts, join and assign other users to them, say things within those contexts, and, eventually, leave and close the interactions.

The set up, close, join, leave, etc., message types are standard declarations provided by Speech that allow users to modify the interaction space. Within the Speech console, you can get help on these commands by typing “help say”. But, besides saying things, users can also see what is happening (e.g. which interactions are taking place? which roles do I play?). Typing “help see” will give you explanations on how to observe the particular way in which interactions are structured within the Speech server at a given moment.

## Speech as a language for programming a communication infrastructure

But the major difference between the Speech server and a simple communication infrastructure is not its ability to hold structured conversations, but the fact that it can be programmed. To understand in which sense the Speech server can be programmed, note that the previous minimum configuration represents a state of anarchy: people can say what they want, and structure their conversations the way they like; moreover, there are no rules: someone may set up a new interaction, and this interaction could be closed immediately by anyone else.

Now, think of the way people communicate in a given context. First, the interactions and the roles that people play, as well as the things they say, are commonly constrained to certain types. And the things that they can say, see and do depend on the kind of role they play, as well as on the specific circumstances in which they attempt to do it. So, people’s interactions are commonly shaped and ruled, at least to some extent, and a Speech program precisely encodes these rules so that they can be interpreted at runtime by the Speech virtual machine.

## Simulating Twitter interaction through the Speech console

For instance, what are the constraints imposed by Twitter on user interaction? Which norms are enforced? Basically, interactions within the Twitter community are shaped around member accounts and lists; interacting users can only be guests or registered tweeters, who can be followers of other tweeters and be listed; concerning the things they say, guests can only set up new accounts, whereas tweeters can only tweet messages whose length is constrained to 140 characters, re-tweet others’ messages, join other tweeter accounts as followers (i.e. follow other users), etc. Concerning norms, tweeters can only follow other users if they are not blocked by them; tweets issued within some account are automatically notified to their followers; etc. These and other norms and rules are part of the specification of Twitter as a communication network, and these norms, together with the types of interactions, roles and messages, can easily be programmed in Speech.

Let’s suppose that the org.hablapps.twitter.Program trait implements these types and norms; then, you can tell the Speech server that it must manage user interaction according to the structure and rules of Twitter, with a simple mixin composition:

```
import org.hablapps.{ speech, twitter }

object TwitterWeb extends App {
  object System extends speech.web.PlainSystem with twitter.Program
  System.launch
}
```

Once you execute this program, you can launch the Speech console and test different Twitter scenarios. The following video shows one of them:

• First, a new twitter community is created (named “habla”)
• Then, a guest user enters the community and decides to register himself as a Twitter user (@serrano)
• Another twitter account is created for tweeter @morena; this time the account is private
• @morena follows @serrano, so that she receives whatever serrano tweets within his account
• @serrano attempts to follow @morena, but her account is private, so the “following” declaration is kept pending for approval
• @morena allows @serrano to follow her
• Eventually, @morena decides to unfollow @serrano and she also fires his follower role within her account

In sum …

In the light of all this, what can we say about the purpose and value proposition of Speech? First, the Speech console shows that Speech is only suitable for programming software which aims at managing user interaction. More specifically, Speech allows programmers to implement the structure and rules that govern the interactions of users within the application domain, i.e. the joint activities (or processes) carried out by people within a given social context. Thus, we can think of Speech as a (social) process-oriented programming language. Moreover, Speech is not a general-purpose language but a domain-specific language: in fact, Speech is a language that allows us to program a structured communication infrastructure, not a general-purpose machine. Last, since the business logic of social applications largely deals with the kind of interaction requirements addressed by Speech, it can be described as a DSL for programming the business logic of social applications.

Why should we program social applications in Speech? First, because programmers don’t have to start from scratch but from a programmable communication infrastructure (i.e. less code needed); and, second, because the Scala embedding of Speech allows us to implement the structure and rules of interactions in a very concise and understandable way. We strive to achieve minimum levels of accidental complexity, so that functional requirements can be implemented in the most direct way possible. In upcoming posts, we will tell you about new improvements in the Speech embedding.

Have great holidays!

Posted in Apps, Speech, Web console

## Sample Speech Apps: Twitter, Trac, Do&Follow … and Big Brothapp!

Which kinds of applications are most suitable for Speech? How do we design a Speech app? How do we implement a Speech design using its Scala embedding? To answer these questions in a pleasant and entertaining way, we completely re-designed our web page and chose four applications in quite different domains for illustrating the virtues of our DSL. The whole Habla Computing team has been involved in the design and implementation of these apps. Here they are!

We all know Twitter. It’s a micro-blogging platform which allows people to publish short-message updates. Can we program the structure of Twitter interactions and its communication rules in Speech? Of course! Accounts, tweeters, tweets, statistics, and so forth, can be very easily mapped into social interactions, agent roles, speech acts and information resources  – the different kinds of social abstractions put forward by Speech. “Following” rules for private accounts or blocking users can also be expressed very easily using empowerment and permission rules. Monitoring rules can also represent communication rules (e.g. forwarding tweets to your followers) and notification preferences (e.g. being informed that you’ve got a new follower) straightforwardly.

Do&follow up is a task-based management application which allows project managers to assign tasks to people, monitor progress, attach information resources to each ongoing task, automatically launch new tasks according to a pre-established workflow, etc. It belongs to a completely different domain from Twitter … doesn’t it? Not really. Indeed, both workflow management software and social networks ultimately deal with people and their needs for communication and collaboration. The differences between them simply lie in the way communications are structured and governed. Thus, we can model tasks and the people responsible for them as particular types of social interactions and agent roles, respectively, and represent workflows through life-cycle initiation and finalisation rules.

The Trac application is an issue tracking system for software development projects. It’s somewhat similar to the do&follow up application, but you won’t find a pre-established workflow here. The Trac application is interesting from the point of view of Speech because it allows us to illustrate its ability to support both structured and unstructured communication modalities: on the one hand, issues go through a number of well-defined stages and follow specific rules for assignment and completion; on the other, the owner, the reporter and arbitrary members of the application can engage in free conversation concerning an issue. Again, it’s all about speech acts with different normative weights and constraints.

The Big Brothapp prototype was designed after the rules of the Big Brother reality game show. The purpose of the application is to assist the producers in the management of the TV show, particularly by enabling a computational representation of the house, the eviction processes and any other aspect which belongs to the “rules of the game”. These rules can be accommodated very conveniently by Speech, so that the structure and dynamics of the contest can be formalised with empowerment and life-cycle rules, standard speech acts, etc., very easily!

### Watch the design and coding video guides!

We used Big Brothapp to illustrate the Speech design and coding processes through a series of videos that you can find on our web page. The apps section also shows videos for the other applications, as well as their source code on GitHub. If you have any comments or questions, or wonder how your preferred app could be modeled in Speech, please tell us!

Last, you will also find in the apps section direct links to runtime deployments of these apps through their respective Consoles – but this will be the subject for the next post. Enjoy!

Posted in Apps, Speech

## Never a “Hello, World!” was so real

As promised, here is our “Hello, World!” example (also in PDF). This kind of program illustrates the very basics of a programming language. Usually, the program consists of printing the “Hello, World!” message to a console. However, Speech does not know much about consoles or strings. Instead, it is really good at representing social processes and the rules governing them. Therefore, we will be helping God to fulfill his functional requirement list: create the world, create the human being and empower him to say something. So please, be open-minded to meet Speech!

Before you leave: we hope to see you in the next delivery, where we will be showing you how to program Twitter in Speech, a real application that fits perfectly with our language.

Posted in Uncategorized

## Speech 0.1 released!

We are happy to announce the first release of the Speech interpreter. This is a beta release with the minimum functionality required to test significant application examples, demonstrate the virtues of the Speech DSL and … receive feedback from you! You can download the interpreter from the Speech Community Portal.

There you will also find instructions to run the interpreter, a short presentation of Speech, a first version of the user guide to Speech programming in Scala, and links to prototype applications. Parts of the Speech interpreter were already open sourced (particularly, the updatable package), and we aim at publishing other major modules of the interpreter in the near future.

Admittedly, the documentation is far from complete, so our purpose now is mainly to show you what a Speech program looks like. We expect to give you soon full-fledged documentation, including the Speech API and a design guide, in order to allow you to become a proficient Speech programmer. In the meantime, we have planned a series of blog posts focusing on different application domains. And the first program that we will use to exemplify Speech will be … the “Hello, world!” program, of course. I hope you like it!

Posted in Uncategorized

## Macros and Reflective Calls to eliminate boilerplate

In our previous post, we told you about updatable, a library that empowers programmers to build and update immutable objects in generic contexts. We saw the builder macro as a main element of the library, but we did not explain in detail how it was implemented. We think it uses an interesting pattern to eliminate boilerplate, so we want to share it with you. Instead of showing the original updatable builder, we are going to use a reduced version, in order to keep the example small. We call it factory, because its sole aim is to instantiate traits. Now let’s get to work!

Even in Scala, there are situations where we can find annoying boilerplate. That could be the case of the following lines:

```
trait A {
  val a1: Int
  val a2: String
}

case class AImpl(a1: Int, a2: String) extends A

AImpl(a1 = 0, a2 = "")

trait B extends A {
  val b1: Double
}

case class BImpl(a1: Int, a2: String, b1: Double) extends B

BImpl(a1 = 3, a2 = "", b1 = 3.0)
```

The case class implements the trait and creates an object factory (among many other things). There is some boilerplate in this implementation that would be nice to eliminate: concretely, the redundant argument list that makes up the constructor. This might become a real problem if the number of attributes grows excessively. Our approach to eliminating this boilerplate consists of using macros as follows:

```
trait A { ... }

val A = builder[A]

A(_a1 = 0, _a2 = "")

trait B extends A { ... }

val B = builder[B]

B(_a1 = 3, _a2 = "", _b1 = 3.0)
```

Scala 2.10 macros are limited in the creation of new types (a limitation that has been lifted with type macros in the macro paradise project), and can only return expressions. So in order to instantiate objects of types A and B we have to exploit anonymous classes. These instances will be returned by the apply method of the object returned by the builder macro. But you may wonder how it is possible to get these invocations working, since the apply method signature seems to be variable in each case. In fact, there are at least two possible solutions to this problem: either returning a new object of a structural type that declares a custom apply method, or simply returning an anonymous function of the proper type. We have chosen the former alternative here, in analogy with the way in which the updatable builder is implemented (where you can find additional services besides the factory method). Thus, the code that should be generated by the macro is shown in the next snippet:

```
// builder[A]
new {
  def apply(_a1: Int, _a2: String): A = new A {
    val a1 = _a1
    val a2 = _a2
  }
}

// builder[B]
new {
  def apply(_a1: Int, _a2: String, _b1: Double): B = new B {
    val a1 = _a1
    val a2 = _a2
    val b1 = _b1
  }
}
```
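
Both alternatives can also be written by hand, which may help to see what the macro saves us from. Here is a non-macro sketch for trait A (`mkA` is a hypothetical name for the function-based variant; the structural-type variant mirrors the generated code):

```scala
import scala.language.reflectiveCalls

trait A {
  val a1: Int
  val a2: String
}

// Alternative 1: an object of a structural type with a custom apply
// method (the form the builder macro expands to)
val A = new {
  def apply(_a1: Int, _a2: String): A = new A {
    val a1 = _a1
    val a2 = _a2
  }
}

// Alternative 2: an anonymous function of the proper type
val mkA: (Int, String) => A =
  (_a1, _a2) => new A { val a1 = _a1; val a2 = _a2 }
```

The structural-type form relies on reflective calls at the use site, which is why the updatable builder can bundle further services alongside apply; the function form is simpler but can expose nothing but the factory itself.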

Now it is time to analyze the macro implementation. Since we are going to generate dynamic code, mainly in apply’s argument list, it is not feasible to exploit the reify macro, which allows the programmer to create the resulting expression in a natural way. So, one could choose either to use parse or to create the AST manually. The latter is discouraged by the macro author because the code becomes pretty verbose. In the current case, if we had used the raw style, the number of lines would have grown remarkably, because creating trait instances produces very complex trees. Nevertheless, we should consider the AST version if we aim to optimize macro execution times. The macro implementation is shown next:

```
import scala.language.experimental.macros
import scala.reflect.macros.Context

def builder[T] = macro builderImpl[T]

def builderImpl[T: c.WeakTypeTag](c: Context) = {
  import c.universe._
  import c.mirror._

  implicit class SymbolHelper(sym: Symbol) {
    private def cross(t: Type): Type = t match {
      case NullaryMethodType(inner) => inner
      case _ => t
    }
    def name: String = sym.name.encoded
    def tpe: Type = cross(sym.typeSignature)
    def isAccessor: Boolean = sym.isTerm && sym.asTerm.isAccessor
  }

  implicit class TypeHelper(tpe: Type) {
    def name: String = (tpe.typeSymbol: SymbolHelper).name
    def accessors: List[Symbol] =
      tpe.members.toList.reverse.filter(sym => sym.isAccessor)
  }

  def buildObject = {

    val tpe = weakTypeOf[T]

    def instanceTrait = {
      val vals = tpe.accessors.foldLeft("")(
        (s, sym) => s + s"val ${sym.name} = _${sym.name}\n")
      s"new ${tpe.name} { $vals }"
    }

    def applyArguments = tpe.accessors map { sym =>
      s"_${sym.name}: ${sym.tpe}"
    } mkString ","

    s"""
      new {
        def apply($applyArguments): ${tpe.name} = $instanceTrait
      }
    """
  }

  c.Expr[Object](c.parse(
    s"""
      { val aux = $buildObject; aux } // SI-6992
    """
  ))
}
```

First, it is important to notice that neither the macro implementation nor the macro definition declares a result type. Also, note that the type parameter of the Expr object returned by the macro is simply Object. Thus, we let the compiler infer the proper type of the factory object returned by the macro. Concerning the implementation, we find two main areas in the previous code: reflection tasks and tree creation tasks. The former are owned by the SymbolHelper and TypeHelper implicit classes, which extend Symbol and Type, respectively. The accessor concept refers to the methods that permit the programmer to access the trait values; in the factory’s case they have a direct correspondence with apply’s argument list. With respect to the tree creation tasks, as we said before, the parse method seems to be the better alternative in this case to generate trees. To invoke it, we need a string containing the code to reify. That is buildObject’s duty, which uses string interpolation to format the code that will finally be expanded. This fresh Scala feature notably improves the code’s readability.
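
The interpolation-based assembly can be illustrated outside the macro context. In this standalone sketch, a hard-coded accessor list stands in for the reflective queries the macro performs (the values and names here are illustrative only):

```scala
// Stand-in for tpe.accessors: pairs of (attribute name, type name)
val accessors = List(("a1", "Int"), ("a2", "String"))

// Mirrors applyArguments in the macro: "_a1: Int,_a2: String"
val applyArguments =
  accessors.map { case (n, t) => s"_$n: $t" }.mkString(",")

// Mirrors instanceTrait: the anonymous-class body assigning each val
val instanceTrait = {
  val vals = accessors.map { case (n, _) => s"val $n = _$n" }.mkString("\n")
  s"new A { $vals }"
}

// The final string handed over to c.parse
val generated =
  s"""
    new {
      def apply($applyArguments): A = $instanceTrait
    }
  """
```

Running this produces exactly the kind of source string that c.parse turns into the tree the macro expands to.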

Currently, we are experimenting with some new ideas to make the factory (and therefore updatable) better. Mainly, we would like to generate a parameterized apply method in those situations when the attribute type is abstract:

```
trait C {
  type C1
  val c1: C1
}

val C = builder[C]

C[Int](_c1 = 3)
```
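
A hand-written sketch of the expansion such a parameterized builder would have to produce may clarify the idea (this is hypothetical; the actual macro output may differ):

```scala
trait C {
  type C1
  val c1: C1
}

// Hand-written sketch of what builder[C] would need to generate: a
// parameterized apply that fixes the abstract type member C1
object C {
  def apply[X](_c1: X): C { type C1 = X } = new C {
    type C1 = X
    val c1 = _c1
  }
}

val c = C[Int](_c1 = 3)
val n: Int = c.c1  // C1 is statically known to be Int here
```

The refinement `C { type C1 = X }` in the result type is what keeps the abstract type from being forgotten at the call site.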

We will tell you about this and other extensions in following posts.

To sum up, any programmer knows that boilerplate is no fun at all. To avoid lots of dangerous copy-paste actions, external pre-compilation tools may also be employed to solve the issue. However, by doing so, we are adding unnecessary complexity to the project. Today, we have shown how to use macros to tackle the problem, with a native feature! We have applied it to develop the Speech DSL. How are you planning to use it?

Posted in case classes, Embedded DSLs, immutability, Macros, Scala

## Updating immutable objects in generic contexts

Immutability is one of the hallmarks of functional design, and writing idiomatic programs in Scala relies heavily on manipulating immutable objects. Now, if we don’t have mutable fields (aka vars) … how can we update objects in a convenient way? Scala provides so-called case classes, which have a copy method with the required functionality. And we can also use lenses, a higher-level abstraction that you can find in popular Scala libraries such as scalaz and shapeless (you can find a macro-based implementation in the macrocosm project as well). Nevertheless, all these implementations build one way or another upon case classes as the basic updating mechanism.

Now, sometimes writing case classes for your specification traits is cumbersome, since it involves a lot of boilerplate. And this problem is especially exacerbated in the presence of inheritance hierarchies, where traits also get polluted with getters and setters. Wouldn’t it be nice if we found some way of automatically deriving case classes and eliminating all this boilerplate? Well, this is a question for macros, and type macros in particular. But type macros are still a pre-release feature of Scala. So, what can be done with def macros alone? We have developed a library that exploits def macros in combination with reflective calls to eliminate the need for writing implementation classes, and it allows the programmer to update immutable objects in generic contexts with minimum overhead. This library is called org.hablapps.updatable and you can find it on GitHub. Before explaining its functionality, though, let’s illustrate the problem with a simple example, and let’s solve it using case classes.

### The problem …

We will illustrate the kind of updating problem we have to deal with by considering a design problem in the implementation of Speech itself. Among other things, our DSL offers programmers an abstract layer which implements generic types and state transformations that can be reused across any kind of social domain. For instance, the layer includes interaction contexts and agent roles, and the play transformation, which adds a new agent role within some context. We want to implement interaction contexts and agent roles as immutable objects and be able to reuse the play transformation as-is, across any application domain. For instance, think of Twitter: there you find accounts, followers, tweeters, and many other concepts. We can think of accounts as the contexts where tweeters interact with their followers; and following someone would involve playing a new follower role within their account. As another example, think of courses as contexts of interaction for student and teacher agents, and of a student enrolling in a course: this action can also be implemented with the help of the play transformation.

### … Solved using case classes

Our design problem can be understood as a particular example of the family polymorphism problem, which can be easily solved in Scala using abstract types and the cake pattern. Accordingly, the Speech abstraction layer can be understood as a family of types which vary together covariantly in each application layer. In particular, our implementation will be structured in three basic layers:

• An abstract layer (the Speech layer) which provides generic implementations of interactions contexts, agents, and generic transformations, in terms of traits and generic methods.
• An application layer which provides specific implementations of domain-dependent concepts in terms of traits that extends the corresponding generic traits.
• Another application layer which provides the implementation of domain-dependent traits, in terms of case classes.

The following snippet represents an implementation sketch of the first layer:

```
trait Speech {
  trait Interaction[This <: Interaction[This]] { self: This =>
    type Member <: Agent[Member]
    def member: Set[Member]
    def member_=(agent: Set[Member]): This
  }

  trait Agent[This <: Agent[This]] { self: This => }

  def play[I <: Interaction[I]](i: I)(a: i.Member): I =
    i.member = i.member + a
}
```

Here, the Speech layer just implements two traits for the Interaction and Agent types, as well as the play transformation. Note that the play method must work for any type of interaction and agent, and we don’t want to forget the exact type of interaction once we call the method. Hence, the method is parameterized with respect to some interaction type I. Now, the agent to be played within that context must be compatible with the interaction type, i.e. we can play followers within Twitter accounts, but not students. To account for this constraint, we declare an abstract type Member in the Interaction trait and exploit dependent types in the play signature. How do we add the new member agent? We need a setter, of course. And this setter must also return the specific type of the interaction (again, to avoid losing type information). For that purpose, the trait is parameterized with the This parameter, following the standard solution to this problem. Last, note the update sentence in the play method: it’s as if member were a var. But it’s not; it’s simply that we named the getter and setter according to the var convention.

How do we reuse this abstract layer? The following snippet uses the Twitter domain to illustrate reuse of the Speech layer.

```
trait Twitter extends Speech {

  trait Account extends Interaction[Account] {
    type Member = Follower
  }

  def Account(members: Set[Follower] = Set()): Account

  trait Follower extends Agent[Follower]

  def Follower(): Follower
}
```

The Twitter layer simply extends the Speech traits and sets the abstract members to the desired values. Of course, a real implementation will include additional domain-dependent attributes, methods, etc., to the Account and Follower traits (think of the Speech member attribute as a kind of standard attribute). Note that we also included factory methods for the Account and Follower types. In a real implementation, it is more than likely that we will need them. And we don’t want to commit to any specific implementation class, so we declare them abstract. The next portion of the cake will provide the implementations of the Twitter types – using case classes:

```
trait TwitterImpl { self: Twitter =>

  private case class AccountClass(member: Set[Follower]) extends Account {
    def member_=(agent: Set[Follower]) = copy(member = agent)
  }

  def Account(members: Set[Follower] = Set()): Account = AccountClass(members)

  private case class FollowerClass() extends Follower

  def Follower(): Follower = FollowerClass()
}
```

Now, this is the “ugliest” part: we had to provide case classes for all the application traits, and the getters/setters for all of their attributes (standard and non-standard). In this simple example, we just have the “member” attribute, but we may have dozens in a real implementation. This implementation layer must also provide implementations for factory methods, which happen to be the only way to create new entities (note the private declaration of case classes).

The following snippet exercises the above implementation:

```
object s extends Twitter with TwitterImpl
import s._

val (a, f1, f2) = (Account(), Follower(), Follower())

// test _=
assert((a.member = Set()).member == Set())
assert((a.member = Set(f1, f2)).member == Set(f1, f2))

// test play
val a1 = play(a)(f1)
assert(a1.member == Set(f1))
```

### … Solved using the org.hablapps.updatable package

The major structural change to the above implementation is that we don’t need the case class layer. Thus, we may qualify the following implementation as trait-oriented. Let’s see how the Speech and Twitter layers are modified:

```
trait Speech {
  trait Interaction {
    type Member <: Agent
    val member: Set[Member]
  }

  implicit val Interaction = weakBuilder[Interaction]

  trait Agent

  implicit val Agent = weakBuilder[Agent]

  def play[I <: Interaction: Builder](i: I)(a: i.Member): I =
    i.member := i.member + a
}
```

The first noticeable change is that … we don’t need getters and setters! We just declared our attributes using vals. And the implementation of the play method has not become excessively complicated: we just replaced the “=” operator with the new operator “:=”, and included through its signature evidence that the type parameter I has an implementation of the Builder type class. Instances of this type class can be understood as factories that allow programmers to instantiate and update objects of the specified type in a very convenient way. In particular, the Builder type class enables an implicit macro conversion which gives access to the “:=” operator. All this in a type-safe way. In a sense, builders play the same role as case classes did in the previous implementation. But there is a crucial difference: builders are created automatically through the builder macro, as shown in the following snippet of the second layer:

```
trait Twitter extends Speech {
  trait Account extends Interaction {
    type Member = Follower
  }

  implicit val Account = builder[Account]

  trait Follower extends Agent

  implicit val Follower = builder[Follower]
}
```
```

The only difference in this layer with respect to the case class implementation is that no factory methods are needed, since builders play that role. Now, if you go back to the previous snippet you will also notice weakBuilder invocations for the types Interaction and Agent. Certainly, we don’t need strict builders for these types, since they are “abstract”. However, builders also provide attribute reifications, and we certainly want a unique reification for the member attribute. The weakBuilder macro generates the corresponding reifications. The following snippet shows how to access reified attributes, and mimics the functionality included in the case class implementation.

```
object s extends Twitter
import s._

// test reifications
assert(Account.attributes == List(Account._member))

// create instances
val (a, f1, f2) = (Account(), Follower(), Follower())

// test attribute access and the += / -= modifiers
assert(a.member == Set())
assert(((a.member += f2).member -= f2).member == Set())

// test play
val a1 = play(a)(f1)
assert(a1.member == Set(f1))

println("ok!")
```

Note that the factory method provided by the Account builder includes default parameters as well. These default parameters are defined through the Default type class. The companion object of this type class comes equipped with default values for common Scala types, but you can also provide default values for your own specific types. As you can see, the default value defined for the type Set[_] is the empty set.
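To give an idea of how such defaults can be wired together, here is a minimal, self-contained sketch of a Default-style type class. The names and signatures are illustrative only and need not match the actual updatable API:

```
// Sketch of a Default-style type class (illustrative, not the updatable API)
trait Default[A] { def value: A }

object Default {
  def apply[A](implicit d: Default[A]): A = d.value

  // defaults shipped for common Scala types …
  implicit def setDefault[A]: Default[Set[A]] =
    new Default[Set[A]] { def value = Set.empty[A] }
  implicit val intDefault: Default[Int] =
    new Default[Int] { def value = 0 }
}

// … and a user-provided default for a domain-specific type
case class Score(points: Int)
implicit val scoreDefault: Default[Score] =
  new Default[Score] { def value = Score(0) }

assert(Default[Set[String]] == Set())
assert(Default[Score] == Score(0))
```

A factory macro can then look up the implicit Default instance for each attribute whose value is not supplied explicitly.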

Concerning the rest of the snippet, we also illustrated the use of the ‘+=’ and ‘-=’ operators. Basically, these operators allow the programmer to update a multivalued attribute by specifying just the element to be added to or removed from the collection. To be able to use these operators, the type constructor of the attribute type must implement the Modifiable type class. Currently, the updatable package offers modifiable instances for Option and any kind of Traversable.
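The kind of abstraction behind ‘+=’ and ‘-=’ can be sketched as a type class over the attribute’s type constructor. Again, the names below are illustrative; the actual encoding in updatable may differ:

```
// Sketch of a Modifiable-style type class behind += and -= (illustrative)
trait Modifiable[C[_]] {
  def add[A](col: C[A], a: A): C[A]
  def remove[A](col: C[A], a: A): C[A]
}

object Modifiable {
  implicit val setModifiable: Modifiable[Set] = new Modifiable[Set] {
    def add[A](col: Set[A], a: A): Set[A] = col + a
    def remove[A](col: Set[A], a: A): Set[A] = col - a
  }
  implicit val optionModifiable: Modifiable[Option] = new Modifiable[Option] {
    def add[A](col: Option[A], a: A): Option[A] = Some(a)
    def remove[A](col: Option[A], a: A): Option[A] = col.filterNot(_ == a)
  }
}
```

Given such an instance, the ‘+=’ operator only needs to delegate to `add` and then perform a ‘:=’ update with the resulting collection.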

## … But be careful with non-“final” attributes

Let’s suppose that we changed slightly the signature of the play method:

```
def playAll[I <: Interaction: Builder](i: I)(ags: Set[i.Member]): I =
  i.member := ags
```

Is this type-safe? Certainly not, since the actual type may have refined the member attribute to a proper subtype of Set. For instance, the actual type may have overridden the member declaration with a ListSet, while the actual argument ags may be a HashSet. The source of this problem is that the member attribute is not “final”, in the sense that it can be overridden. We will consider an attribute “final” if every component of its declared type is either a final class or a reference to an abstract type.
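The hole can be reproduced in plain Scala, without any macros. A hypothetical sketch (the trait names here are illustrative, not part of updatable):

```
import scala.collection.immutable.{HashSet, ListSet}

// Self-contained illustration of the non-final attribute problem
trait Members {
  type Member
  val member: Set[Member]   // non-"final": subtypes may refine Set
}

trait ListMembers extends Members {
  type Member = String
  // the actual type refines the attribute to a proper Set subtype …
  override val member: ListSet[Member] = ListSet()
}

// … but an update typed against Set[Member] happily accepts any Set,
// e.g. a HashSet, silently losing the ListSet refinement:
// update(i)(member = HashSet("a"))   // type-checks, breaks the refinement
```

In other words, the static type Set[i.Member] says nothing about the refined collection type chosen by the actual instance, so a generic update can replace a ListSet with a HashSet without any complaint from the compiler.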

We could have forbidden non-final attributes from being used in update sentences, but this would rule out the above implementation of the play method, which is perfectly safe: there, no problem arises because the ‘+’ operator is defined by the different subtypes of the trait Set. So, we ended up deciding to simply emit a warning if non-final attributes are used in updating sentences in a generic context. If you want to eliminate that warning, you can always make the attribute declaration final with the help of new auxiliary abstract types. For instance, look at the following snippet: the member declaration now refers to a new abstract type MemberCol[_], which forces us to change the declaration of the playAll method so that the actual type of the attribute is taken into account.

```
trait Interaction {
  type MemberCol[x] <: Set[x]
  type Member <: Agent
  val member: MemberCol[Member]
}

def playAll[I <: Interaction: Builder](i: I)(ags: i.MemberCol[i.Member]): I =
  i.member := ags
```

UPDATE: the above snippet has been changed to fix a mistake detected by Eugene Burmako. Thanks Eugene!

## In hindsight …

We spent a considerable amount of time on the design and implementation of the updatable package, but it was worth it. The Speech layer is populated by several abstract types with dozens of standard attributes, and forcing the application programmer to provide getters and setters for them, for each of the application types, would be a tough job.

But we also found the updatable library useful for other parts of the Speech platform: for instance, we exploit it to facilitate the serialization of JSON objects, so that we can automatically generate a serializer for buildable types (i.e. instances of the Builder type class). We will tell you about this and other applications of the updatable package in upcoming posts, paying particular attention to macro issues.

But there is still a lot that could be done … besides fixing bugs, of course ;). For instance, we currently require a trait to have all of its type members defined in order to generate a builder for it, and it would be nice to relax this constraint. Also, we may extend the updating operator := to cope with nested updates (similarly to what you can achieve with lenses). And we may add support for union-like types, try out type macros, etc. We warmly welcome any comments, suggestions for new functionality, corrections … and any other kind of help. Enjoy it!


## Welcome message

Think of information systems, Web 2.0 apps, games, e-learning, e-commerce, and the rest of e-* applications. Certainly, these application domains differ significantly in several respects, but can we find some commonalities? We do think so: they deal directly with people; they are deeply concerned with their needs for communication, collaboration and coordination; they can all be regarded as social apps. If we take a look at their functional requirements, we will invariably find people playing different roles, saying things to one another, seeing what is happening, generating and consuming information, and so forth; moreover, we will also notice how the application must take into account different normative concerns: permissions to do something, commitments endorsed by particular role players, monitoring rules, etc.

How are these applications developed nowadays? Programmers mainly use object- and functional-oriented programming languages, such as Java, C#, Scala, Haskell, etc. Yet, functions and objects are quite apart from the kinds of social abstractions we just mentioned above. Similarly, factories, facades, monads, etc., are alien to domain experts and non-IT people. The likely result: a daunting amount of accidental complexity leaked into the code. Clearly, if we want to strive for purely functional code, i.e. software which directly encodes the functional requirements of the application, we have to raise the level of abstraction supported by the programming language.

Towards social-oriented programming …

In Habla Computing we are building Speech, a domain-specific programming language for implementing the domain logic of social apps. Speech offers the programmer computational counterparts of social concepts such as roles, speech acts, interaction contexts, commitments, permissions, and the like, which aim at closing the gap between natural-language functional requirements and executable code. The net result of this higher-level of abstraction is a drastic reduction in lines-of-code, development times and cost, as well as a significant increase in quality. Of course, we are not alone in this quest for functional purity: the current landscape abounds with domain-specific proposals that aim at simplifying the implementation of business processes, vertical social networks, etc.

Two features, however, distinguish Speech from BPMS, social networking engines, and other domain-specific technologies. First, Speech is not confined to niche domains; do your functional requirements deal directly with people? If so, you can profit from Speech, regardless of the kind of social process supported by the application. Second, Speech is a language for programmers; it is neither a modeling language for business analysts, nor a suite of tools that cope with each functional concern separately. Certainly, we aim at bridging the gap between non-IT people and programmers, and we claim Speech designs to be directly understandable by functional experts. But programmers need a self-sufficient, cohesive and expressive language. Thus, besides being a domain-specific technology, Speech has been designed as a programming language which takes into account every functional concern in a modular and cohesive way. Moreover, in order to foster adoption of Speech in the programming language community, we’ve decided to offer Speech as an embedded DSL, rather than as a stand-alone implementation.

… embedded in Scala!

There are several very good languages which enable an embedded implementation strategy, but we finally chose Scala. We especially like its mix of functional and object-oriented features, and love its new experimental ones: macros turned out to be essential for us. This blog will allow us to unveil the major challenges we faced in implementing Speech, and how Scala helped us to solve them. We strive for functional-programming purity in our codebase, so the title of this blog also appeals to that leitmotiv. In this regard, topics that will eventually arise include: coping with updates of immutable objects within generic contexts; dealing with references to evolving immutable objects; etc. Some of these issues led us to develop general-purpose libraries that we plan to open-source. We are open to suggestions, recommendations for improvements, and any other form of collaboration. Your feedback is really important to us.

We will also use this blog to announce the release of our products. We are working against the clock to launch the beta release of the Speech interpreter in the coming weeks, so we expect to contact you again very soon. Finally, this blog will also give us the opportunity to advertise our participation in different events. One of these events will take place on 10th-13th April at Cambridge, UK: join us at Code Generation 2013!
