An inefficient Haskell implementation of the Integer Partitioning problem

This is my first shot at a solution to the integer partitioning problem in Haskell. It is a very inefficient algorithm, and it would probably either hang forever or crash beyond, say, n = 20, but it is a first step towards an efficient implementation of a problem which turns out to be surprisingly difficult to implement in idiomatic Haskell as compared to, say, Java.

This implementation also generates the partitions in lexicographic order.

Here is the code:

module Partitions where
isort :: Ord a => (a -> a -> Bool) -> [a] -> [a]
isort _ [] = []
isort f (x:xs) = insert f x (isort f xs)
                   where
                     insert _ x [] = [x]
                     insert f x (y:ys) | f x y = x : (y:ys)
                                       | otherwise = y : insert f x ys
lexical :: Ord a => [a] -> [a] -> Bool
lexical [] _ = True
lexical _ [] = False -- a non-empty list is lexicographically greater than the empty list
lexical (x:xs) (y:ys) | x > y = False
                      | x < y = True
                      | otherwise = lexical xs ys
clean :: Ord a => [[a]] -> [[a]]
clean xss = h (map (isort (<)) xss) []
      where
        h [] acc = acc
        h (xs:xss) acc = if elem xs acc then h xss acc
                         else h xss (xs:acc)
partitions :: Int -> [[Int]]
partitions 0 = []
partitions n = isort lexical $ clean $ f start [start]
               where
                 start = replicate n 1
                 f [_] acc = acc
                 f xs acc = f (normalise xs) (acc ++ cs) ++ clean (concat (map (\ys -> f ys []) cs))
                   where
                     normalise [] = []
                     normalise [x, y] = [x+y]
                     normalise (x:xs) = x : normalise xs
                     combinations [] = []
                     combinations [x] = [[x]]
                      combinations xs = g xs (length xs - 2)
                       where
                         g xs s | s < 0 = []
                                | otherwise = [take s xs ++ [xs!!s + xs!!(s+1)] ++ drop (s+2) xs] ++
                                  g xs (s-1)
                     cs = clean $ combinations xs

Running some sample tests:

$ ghci
GHCi, version 8.0.2: http://www.haskell.org/ghc/
Prelude> :l Partitions.hs
[1 of 1] Compiling Partitions       (Partitions.hs, interpreted )
Ok, modules loaded: Partitions.
*Partitions> partitions 0
[]
*Partitions> partitions 4
[[1,1,1,1],[1,1,2],[1,3],[2,2],[4]]
*Partitions> partitions 9
[[1,1,1,1,1,1,1,1,1],[1,1,1,1,1,1,1,2],[1,1,1,1,1,1,3],[1,1,1,1,1,2,2],[1,1,1,1,1,4],[1,1,1,1,2,3],[1,1,1,1,5],[1,1,1,2,2,2],[1,1,1,2,4],[1,1,1,3,3],[1,1,1,6],[1,1,2,2,3],[1,1,2,5],[1,1,3,4],[1,1,7],[1,2,2,2,2],[1,2,2,4],[1,2,3,3],[1,2,6],[1,3,5],[1,4,4],[1,8],[2,2,2,3],[2,2,5],[2,3,4],[2,7],[3,3,3],[3,6],[4,5],[9]]
*Partitions> partitions 12
[[1,1,1,1,1,1,1,1,1,1,1,1],[1,1,1,1,1,1,1,1,1,1,2],[1,1,1,1,1,1,1,1,1,3],[1,1,1,1,1,1,1,1,2,2],[1,1,1,1,1,1,1,1,4],[1,1,1,1,1,1,1,2,3],[1,1,1,1,1,1,1,5],[1,1,1,1,1,1,2,2,2],[1,1,1,1,1,1,2,4],[1,1,1,1,1,1,3,3],[1,1,1,1,1,1,6],[1,1,1,1,1,2,2,3],[1,1,1,1,1,2,5],[1,1,1,1,1,3,4],[1,1,1,1,1,7],[1,1,1,1,2,2,2,2],[1,1,1,1,2,2,4],[1,1,1,1,2,3,3],[1,1,1,1,2,6],[1,1,1,1,3,5],[1,1,1,1,4,4],[1,1,1,1,8],[1,1,1,2,2,2,3],[1,1,1,2,2,5],[1,1,1,2,3,4],[1,1,1,2,7],[1,1,1,3,3,3],[1,1,1,3,6],[1,1,1,4,5],[1,1,1,9],[1,1,2,2,2,2,2],[1,1,2,2,2,4],[1,1,2,2,3,3],[1,1,2,2,6],[1,1,2,3,5],[1,1,2,4,4],[1,1,2,8],[1,1,3,3,4],[1,1,3,7],[1,1,4,6],[1,1,5,5],[1,1,10],[1,2,2,2,2,3],[1,2,2,2,5],[1,2,2,3,4],[1,2,2,7],[1,2,3,3,3],[1,2,3,6],[1,2,4,5],[1,2,9],[1,3,3,5],[1,3,4,4],[1,3,8],[1,4,7],[1,5,6],[1,11],[2,2,2,2,2,2],[2,2,2,2,4],[2,2,2,3,3],[2,2,2,6],[2,2,3,5],[2,2,4,4],[2,2,8],[2,3,3,4],[2,3,7],[2,4,6],[2,5,5],[2,10],[3,3,3,3],[3,3,6],[3,4,5],[3,9],[4,4,4],[4,8],[5,7],[6,6],[12]]
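
As a quick sanity check, the sizes of these results agree with the well-known partition counts p(4) = 5, p(9) = 30, and p(12) = 77, so the following expression should evaluate to [5,30,77]:

map (length . partitions) [4, 9, 12]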

An explanation will be provided when the efficient solution has been implemented!

An absolute beginner’s guide to folding in Haskell

(This is a small write-up I did on a forum for a Haskell MOOC).

foldr and foldl are simply means of reducing/accumulating/folding over values of a sequence into a single value. That’s basically it!

To make it more concrete, let’s analyse a specific implementation of foldr and foldl for Lists:

The easier one to understand is actually foldl since it is more intuitive and natural:

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f acc [] = acc
foldl f acc (x:xs) = foldl f (f acc x) xs

The first thing to do is look at the type. How does it read? In plain English, you can read that as “foldl is a function that takes three arguments: the first argument is a function that takes a value of type b and a value of type a and produces a value of type b, the second argument is a value of type b (this is the accumulator), and the final argument is a list of values of type a. The overall result is a single value of type b”. Makes sense?

With that intuition, let’s look at an example and map that to the definition itself:

Suppose you have a list, xs = [1, 2, 3, 4, 5] and you want to find the sum of all the elements of this list. Then you basically want to reduce the list into a single element under addition, right? So you can define the whole operation as: foldl (+) 0 xs.

Now see how that maps to the definition. The first argument to foldl, which is a function, is +, and this makes sense since + is a binary function that takes two numerical values and produces their sum. The second argument is the “accumulator” (basically the value that keeps accumulating the results as we traverse the list), and we want to start off with 0 here since that is the identity for addition. Finally, the final argument is the list of values itself, xs.

Now look at the body of foldl. It has two patterns to match the inputs against:

1). foldl f acc [] = acc

So when we run out of values in our input list, we simply return the accumulator, which had been dutifully collecting the running sum of the elements of the list. So in our case, this would be analogous to something like foldl (+) acc [] = acc.

2). foldl f acc (x:xs) = foldl f (f acc x) xs

This is the more interesting case. Translating it to our example, we might have something like foldl (+) acc (x:xs) = foldl (+) (acc + x) xs. This is the crucial part – note the acc + x part. In the recursive call, the next value of acc will be acc + x, right? So we are collecting, in this example, the running sum of the elements of the list in the variable acc. Now, the most important bit – note that acc is always the left operand of the function, as in acc + x (which, in function application form, would be (+) acc x). This is the reason why we call it “foldl” or “fold left” – we are reducing the list starting from its left end!

So, for example:
foldl (+) 0 [1, 2, 3, 4, 5] can be expanded (conceptually) as:

foldl (+) 0 [1, 2, 3, 4, 5]
= foldl (+) (0+1) [2, 3, 4, 5]
= foldl (+) ((0+1)+2) [3, 4, 5]
= foldl (+) (((0+1)+2)+3) [4, 5]
= foldl (+) ((((0+1)+2)+3)+4) [5]
= foldl (+) (((((0+1)+2)+3)+4)+5) []
= (((((0+1)+2)+3)+4)+5) — from the first pattern in the definition of `foldl`.
= 15

As you can see, we move from left to right across the list, accumulating values as we go. One word of caution, though: in Haskell, foldl as defined here is lazy, so it actually builds up a chain of unevaluated thunks rather than a running total; it is the strict variant foldl' (from Data.List) that delivers the constant-space behaviour this left-to-right traversal suggests.
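
If you want that constant-space behaviour in practice, here is a minimal sketch using the strict variant (assuming GHC; the precise memory behaviour also depends on optimisation settings):

import Data.List (foldl')

sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

-- foldl' forces the accumulator at every step, so
-- sumStrict [1..10000000] runs in constant space and yields
-- 50000005000000, whereas foldl (+) 0 over the same list first
-- builds up a ten-million-deep chain of thunks.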

Now, onto foldr. foldr is very similar to foldl, but whereas foldl folds from left to right, foldr folds from right to left! First, again, let’s look at the definition:

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f acc [] = acc
foldr f acc (x:xs) = f x (foldr f acc xs)

The type in plain English: “foldr is a function that takes three arguments – the first argument is a binary function that takes a value of type a and a value of type b, and produces a value of type b, the second argument is the “accumulator” of type b, and the final argument is the list of values of type a. The overall result is a single value of type b”.

Now taking the same example, let’s analyse the body:

1). foldr f acc [] = acc

This is identical to the first pattern in the body of foldl. This makes sense since once we have exhausted the list, we simply need to return the final result, which is stored in the running accumulator, acc.

2). foldr f acc (x:xs) = f x (foldr f acc xs)

Now, this is the tricky part and needs to be analysed carefully. The analogue of this definition for our sum example would be foldr (+) acc (x:xs) = x + (foldr (+) acc xs). What does this mean?

Well, since we want to process the list starting from its rightmost end, we are forced to update the accumulator from the right and move towards the beginning of the list. This is what that definition basically means. If you observe carefully, the value of the accumulator is actually given by the expression (foldr f acc xs) (or (foldr (+) acc xs) in our example). Remember that in the case of foldl, the accumulator was always the left operand. Likewise, in the case of foldr, the accumulator is always the right operand. This is why in the expression f x (foldr f acc xs), the second operand is the running state of the accumulator. Note that the full call cannot be evaluated until the (foldr f acc xs) part has been evaluated. This means that we keep building up a stack of deferred applications, and only when the entire list has been consumed can we begin actually combining values. This is why, for a strict operation like (+), foldr uses space proportional to the length of the list, whereas the strict left fold foldl' mentioned earlier keeps just the running accumulator.
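
One mitigating point worth stressing: because Haskell is lazy, foldr can stop early whenever the folding function does not always need its second argument. A small sketch:

anyGreaterThan3 :: [Int] -> Bool
anyGreaterThan3 = foldr (\x rest -> x > 3 || rest) False

-- (||) ignores its second argument when the first is True, so the
-- fold never has to walk the whole list; this even works on an
-- infinite list: anyGreaterThan3 [1..] evaluates to True.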

To use the same example, let’s evaluate the call foldr (+) 0 [1, 2, 3, 4, 5]:

foldr (+) 0 [1, 2, 3, 4, 5]
= 1 + (foldr (+) 0 [2, 3, 4, 5])
= 1 + (2 + (foldr (+) 0 [3, 4, 5]))
= 1 + (2 + (3 + (foldr (+) 0 [4, 5])))
= 1 + (2 + (3 + (4 + (foldr (+) 0 [5]))))
= 1 + (2 + (3 + (4 + (5 + (foldr (+) 0 []))))) — we now match with the first pattern in the definition of `foldr`
= 1 + (2 + (3 + (4 + (5 + 0)))) — we now keep updating the accumulator all the way from right to left
= 1 + (2 + (3 + (4 + 5)))
= 1 + (2 + (3 + 9))
= 1 + (2 + 12)
= 1 + 14
= 15

If you observe carefully, the expressions are parenthesised from right to left whereas in the case of foldl, they were from left to right.

Now, you see that both foldl and foldr gave the same result 15 for our example. However, this is only because addition is associative and commutative (just like multiplication), so it doesn’t matter whether we fold from the left or from the right. Multiplication would therefore also give the same result for both foldl and foldr, but subtraction and division would not, since they are neither associative nor commutative.

To confirm this assertion, just compare the outputs of the following expressions:

Prelude> foldr (+) 0 [1..5]
15
Prelude> foldl (+) 0 [1..5]
15

Prelude> foldr (*) 1 [1..5]
120
Prelude> foldl (*) 1 [1..5]
120

Prelude> foldr (-) 0 [1..5]
3
Prelude> foldl (-) 0 $ [1..5]
-15

Prelude> foldr (/) 1.0 [1.0, 2.0, 3.0, 4.0, 5.0]
1.875
Prelude> foldl (/) 1.0 [1.0, 2.0, 3.0, 4.0, 5.0]
8.333333333333333e-3

Let’s just analyse the subtraction example:

foldr (-) 0 [1..5]
= 1 - (foldr (-) 0 [2, 3, 4, 5])
= 1 - (2 - (foldr (-) 0 [3, 4, 5]))
= 1 - (2 - (3 - (foldr (-) 0 [4, 5])))
= 1 - (2 - (3 - (4 - (foldr (-) 0 [5]))))
= 1 - (2 - (3 - (4 - (5 - (foldr (-) 0 [])))))
= 1 - (2 - (3 - (4 - (5 - 0))))
= 1 - (2 - (3 - (4 - 5)))
= 1 - (2 - (3 - (-1)))
= 1 - (2 - 4)
= 1 - (-2)
= 3.

and

foldl (-) 0 [1..5]
= foldl (-) (0-1) [2, 3, 4, 5]
= foldl (-) ((0-1)-2) [3, 4, 5]
= foldl (-) (((0-1)-2)-3) [4, 5]
= foldl (-) ((((0-1)-2)-3)-4) [5]
= foldl (-) (((((0-1)-2)-3)-4)-5) []
= (((((0-1)-2)-3)-4)-5)
= -15.

Et voila!

Optional

In case you are familiar with C, the functions may be rendered thus, for instance:

// The generic foldr function.
// (Note: arithmetic on a void* such as arr+delta is a GCC/Clang
// extension; strictly portable C would cast to char* first.)
void *foldr(void *(*fptr)(void*, void*), void *acc, void *arr, size_t n, size_t delta)
{
    if (!n) {
        return acc;
    } else {
        return fptr(arr, foldr(fptr, acc, arr+delta, n-1, delta));
    }
}

// The generic foldl function
void *foldl(void *(*fptr)(void*, void*), void *acc, void *arr, size_t n, size_t delta)
{
    if (!n) {
        return acc;
    } else {
        return foldl(fptr, fptr(acc, arr), arr+delta, n-1, delta);
    }
}

Note how they map to the Haskell definitions directly.

If you are interested, here is a full program (note that this is only to simulate the Haskell behaviour as closely as possible – it is not the idiomatic way to do these operations in C, and it also leaks memory, which we will not worry about for this demo) that you can run to observe the behaviour and compare it with Haskell:

The program:

#include <stdio.h>
#include <stdlib.h>

void *add_int(void *x, void *y)
{
    int *res = malloc(sizeof(int));
    *res = *((int*) x) + *((int*) y);

    return (void*) res;
}

void *multiply_int(void *x, void *y)
{
    int *res = malloc(sizeof(int));
    *res = *((int*) x) * *((int*) y);

    return (void*)res;
}

void *subtract_int(void *x, void *y)
{
    int *res = malloc(sizeof(int));
    *res = *((int*) x) - *((int*) y);

    return (void*) res;
}

void *divide_double(void *x, void *y)
{
    double *res = malloc(sizeof(double));
    *res = *((double*) x) / *((double*) y);

    return (void*) res;
}

// The generic foldr function
void* foldr(void *(*fptr)(void*, void*), void *acc, void *arr, size_t n, size_t delta)
{
    if (!n) {
        return acc;
    } else {
        return fptr(arr, foldr(fptr, acc, arr+delta, n-1, delta));
    }
}

// The generic foldl function
void *foldl(void *(*fptr)(void*, void*), void *acc, void *arr, size_t n, size_t delta)
{
    if (!n) {
        return acc;
    } else {
        return foldl(fptr, fptr(acc, arr), arr+delta, n-1, delta);
    }
}

int sum_foldr(int a[], size_t n)
{
    int sum = 0, acc = 0;
    void *ret = foldr(add_int, &acc, a, n, sizeof(int));
    if (ret) {
        sum = *((int*) ret);
        free(ret);
    }

    return sum;
}

int product_foldr(int a[], size_t n)
{
    int prod = 1, acc = 1;
    void *ret = foldr(multiply_int, &acc, a, n, sizeof(int));
    if (ret) {
        prod = *((int*) ret);
        free(ret);
    }

    return prod;
}

int sum_foldl(int a[], size_t n)
{
    int sum = 0, acc = 0;
    void *ret = foldl(add_int, &acc, a, n, sizeof(int));
    if (ret) {
        sum = *((int*) ret);
        free(ret);
    }

    return sum;
}

int product_foldl(int a[], size_t n)
{
    int prod = 1, acc = 1;
    void *ret = foldl(multiply_int, &acc, a, n, sizeof(int));
    if (ret) {
        prod = *((int*) ret);
        free(ret);
    }

    return prod;
}

int sub_foldr(int a[], size_t n)
{
    int diff = 0, acc = 0;
    void *ret = foldr(subtract_int, &acc, a, n, sizeof(int));
    if (ret) {
        diff = *((int*) ret);
        free(ret);
    }

    return diff;
}

int sub_foldl(int a[], size_t n)
{
    int diff = 0, acc = 0;
    void *ret = foldl(subtract_int, &acc, a, n, sizeof(int));
    if (ret) {
        diff = *((int*) ret);
        free(ret);
    }

    return diff;
}

double div_foldr(double a[], size_t n)
{
    double div = 1.0, acc = 1.0;
    void *ret = foldr(divide_double, &acc, a, n, sizeof(double));
    if (ret) {
        div = *((double*) ret);
        free(ret);
    }

    return div;
}

double div_foldl(double a[], size_t n)
{
    double div = 1.0, acc = 1.0;
    void *ret = foldl(divide_double, &acc, a, n, sizeof(double));
    if (ret) {
        div = *((double*) ret);
        free(ret);
    }

    return div;
}

int main(int argc, char **argv)
{
    int a[] = { 1, 2, 3, 4, 5 };
    size_t n = sizeof(a)/sizeof(a[0]);

    double b[] = { 1.0, 2.0, 3.0, 4.0, 5.0 };
    size_t m = sizeof(b)/sizeof(b[0]);

    printf("sum_foldr = %d\n", sum_foldr(a, n));
    printf("product_foldr = %d\n", product_foldr(a, n));

    printf("sum_foldl = %d\n", sum_foldl(a, n));
    printf("product_foldl = %d\n", product_foldl(a, n));

    printf("sub_foldr = %d\n", sub_foldr(a, n));
    printf("div_foldr = %lf\n", div_foldr(b, m));

    printf("sub_foldl = %d\n", sub_foldl(a, n));
    printf("div_foldl = %lf\n", div_foldl(b, m));

    return 0;
}

Running it:

$ gcc -Wall -O2 -o fold fold.c && ./fold
sum_foldr = 15
product_foldr = 120
sum_foldl = 15
product_foldl = 120
sub_foldr = 3
div_foldr = 1.875000
sub_foldl = -15
div_foldl = 0.008333

Exactly the same result as in the case of Haskell.

The basic idea is this – take a simple example, and work through it with pen and paper to really understand how it works!

Mutual Recursion demo in Rust and Racket (inspired by Haskell)

This is a quick post on a Rust version of the Haskell evens and odds program demonstrating mutual recursion (as shown in Hutton’s book).

First off, the Haskell code:

  evens :: [a] -> [a]
  evens [] = []
  evens (x:xs) = x : odds xs

  odds :: [a] -> [a]
  odds [] = []
  odds (_:xs) = evens xs

A sample run:

*Main> let string = "abcde"
*Main> evens string
"ace"
*Main> odds string
"bd"
*Main> string
"abcde"

So the whole idea is to have the evens function display all the characters in even positions (starting from 0), and the odds function likewise display all the characters in odd positions.

The evens function does the actual consing of elements, whilst odds is only used as a trampoline that skips one element and continues the recursion.

Now, for a rough Rust version of it (preserving the original character array):


fn main() {
    fn evens<T: Copy>(xs: &[T]) -> Vec<T> {
        if xs.is_empty() {
            Vec::new()
        } else {
            cons(&xs[0], &odds(&xs[1..]))
        }
    }

    fn odds<T: Copy>(xs: &[T]) -> Vec<T> {
        if xs.is_empty() {
            Vec::new()
        } else {
            evens(&xs[1..])
        }
    }

    fn cons<T: Copy>(x: &T, xs: &[T]) -> Vec<T> {
        let mut vec = Vec::new();

        vec.push(*x);

        for e in xs.iter() {
            vec.push(*e);
        }
        vec
    }

    let string = String::from("abcde");

    println!("{}",
             String::from_utf8(evens(&string.clone().into_bytes())).unwrap());
    println!("{}",
             String::from_utf8(odds(&string.clone().into_bytes())).unwrap());

    println!("{}", string);
}

And a quick run:

Macushla:EvensAndOdds z0ltan$ rustc evens_and_odds.rs
Macushla:EvensAndOdds z0ltan$ ./evens_and_odds
ace
bd
abcde

So, as can be clearly seen, the original string is left unmodified. Of course this version looks quite dirty, but the nice bit is that a parameter of type &[T] also accepts a &Vec<T> argument (via deref coercion), which enables using slicing extensively and naturally inside the functions. The explicit vector copying in cons could, of course, be replaced with an API call (Vec’s extend_from_slice, for instance), but I feel it is much better in its explicit form here.

The Racket version looks much nicer, being more amenable to functional constructs than Rust:

#lang racket

(define (evens xs)
  (if (null? xs)
      '()
      (cons (car xs) (odds (cdr xs)))))

(define (odds xs)
  (if (null? xs)
      '()
      (evens (cdr xs))))

(define (main)
  (let ([string "abcde"])
    (displayln (list->string (evens (string->list string))))
    (displayln (list->string (odds (string->list string))))
    (displayln string)))

And a final run for the Racket version:

evens-and-odds.rkt> (main)
ace
bd
abcde

Interop mini-series – Calling C and C++ Callbacks from Common Lisp (Part 2c)

This post picks up on the first part of this interop mini-series (Calling C and C++ from Common Lisp). I recommend checking out that post first in order to make sense of this one.

Contents

  1. Intent
  2. Demo
  3. Useful functions
  4. Conclusion

Intent

The scope of this post is to cover interop with C and C++ code from Common Lisp using callbacks. In case you are not sure what callbacks are, please check out the first part of this post – Callbacks special.

We will continue to use the cffi library for our demo here as well.

Demo

For this demo, let’s pick a very simple example.

We have a person type which has the following slots/fields – name, gender, and age. From our Common Lisp code, we want to create an instance of person, and then use a function in a native library, prefix_name, to prepend either “Mr.” or “Ms.” to the person’s name, depending on the value of the gender slot (0 for female, anything else for male).

First we define the interface for the native library (in callback_demo.h):

#ifndef __CALLBACK_DEMO_H__
#define __CALLBACK_DEMO_H__ "callback_demo.h"

typedef struct person {
    char* name;
    int gender;
    int age;
} person;

#ifdef __cplusplus
extern "C" {
#endif
    void prefix_name(person*, void (*)(person*));
#ifdef __cplusplus
}
#endif
#endif

We then write the code containing the prefix_name function that will invoke our callback function (in callback_demo.c):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "callback_demo.h"

#define MAXSIZE 50

char* concatenate_names(const char* prefix, char* name)
{
    int len = strlen(prefix) + strlen(name) + 2; /* prefix + space + name + '\0' */

    char* full_name = (char*)malloc(len * sizeof(char));

    if (full_name != NULL) {
        char* cp = full_name;

        while (*prefix != '\0')
            *cp++ = *prefix++;

        *cp++ = 0x20;

        while (*name != '\0')
            *cp++ = *name++;

        *cp = '\0';

        return full_name;
    }

    return name;
}

void prefix_name(person* p, void (*cb)(person*))
{
    const char* MISTER = "Mr.";
    const char* MISS = "Ms.";
    char* res = NULL;

    // 0 - female, anything else male
    res = p->gender == 0 ? concatenate_names(MISS, p->name) :
                           concatenate_names(MISTER, p->name);
    strcpy(p->name, res); /* note: res is leaked here; we ignore that for this demo */
    
    (*cb)(p);
}

void sample_callback(person* p)
{
    printf("%s, %s, %d\n", p->name, p->gender == 0 ? "Female" : "Male", p->age);
}

int main()
{
    person rich;

    rich.name = (char*)malloc(MAXSIZE * sizeof(char));
    strcpy(rich.name, "Rich");
    rich.gender = 1;
    rich.age = 49;

    prefix_name(&rich, &sample_callback);
    
    return 0;
}

Explanation: The code is relatively straightforward. As can be seen from the header file, prefix_name is the entry point to the library (and the one which gets invoked from the Common Lisp code).

The prefix_name function takes an instance of the person structure as well as a callback function. Note the signature of the callback function:

void (*)(person*).

This callback function expects to be passed the (modified) person instance that was supplied as the first parameter to the prefix_name function.

The logic is very simple – check the gender field and then, depending on whether it is 0 or some other value, update the name field of the person instance by prepending “Ms.” or “Mr.” respectively.

Finally, the callback function cb is invoked, passing control back to the client code.

All right, now we compile the code into a library, libcallbackdemo.dylib:

Timmys-MacBook-Pro:Demo z0ltan$ clang -dynamiclib -o libcallbackdemo.dylib callback_demo.c

Timmys-MacBook-Pro:Demo z0ltan$ ls
callback_demo.c		callback_demo.h		libcallbackdemo.dylib

Excellent!

Now we focus on the Common Lisp bit. This part is relatively straightforward. Let’s see the code in action first, and then a bit of explanation.

First the code that calls the native library function, prefix_name (in c-to-lisp.lisp):

;;;; Demonstrating how Common Lisp can invoke functions in C or C++ code, which then themselves invoke a callback function written in Common Lisp.
;;;; This helps in those cases when Common Lisp needs to make use of some 
;;;; functionality present in a native library which is written using callbacks.

(require 'cffi)

(defpackage :c-to-lisp-user
  (:use :cl :cffi))

(in-package :c-to-lisp-user)


;;; Callback demo - first define the foreign library
;;; containing the function which takes a callback function.

(define-foreign-library libcallbackdemo
  (:darwin "libcallbackdemo.dylib")
  (:unix "libcallbackdemo.so")
  (t (:default "libcallbackdemo.dylib")))

(use-foreign-library libcallbackdemo)

;;; define Common Lisp equivalent of the C structure
(defcstruct person
  (name :string)
  (gender :int)
  (age :int))


;;; define the implementation of the callback
(defcallback print-prefixed-person :void
    ((ptr (:pointer (:struct person))))
  (with-foreign-slots ((name gender age) ptr (:struct person))
    (format t "Name: ~a, Gender: ~a, Age: ~d~%"
            name
            (if (zerop gender) "Female" "male")
            age)))


;;; invoke the callback in the C library with a new instance of
;;; a person object
(defun test-callback ()
  (with-foreign-object (rich '(:struct person))
    (setf (foreign-slot-value rich '(:struct person) 'name) "Rich"
          (foreign-slot-value rich '(:struct person) 'gender) 1
          (foreign-slot-value rich '(:struct person) 'age) 49)
    (foreign-funcall "prefix_name"
                     :pointer rich
                     :pointer (callback print-prefixed-person)
                     :void))
  (with-foreign-object (vigdis '(:struct person))
    (setf (foreign-slot-value vigdis '(:struct person) 'name) "Vigdis"
          (foreign-slot-value vigdis '(:struct person) 'gender) 0
          (foreign-slot-value vigdis '(:struct person) 'age) 28)
    (foreign-funcall "prefix_name"
                     :pointer vigdis
                     :pointer (callback print-prefixed-person)
                     :void)))

;;; unload the foreign library
(close-foreign-library 'libcallbackdemo)

And the output:

C-TO-LISP-USER> (test-callback)
Name: Mr. Rich, Gender: male, Age: 49
Name: Ms. Vigdis, Gender: Female, Age: 28
; No value

Explanation: This code is also quite simple. We begin by defining the native library, and then loading it.

Next, we define the callback function using the cffi:defcallback macro. The defined callback function, print-prefixed-person, takes a pointer to a person instance (which is what the prefix_name function inside libcallbackdemo.dylib passes to it), and so we need to define the person structure first.

For that, we use another macro, cffi:defcstruct. As you can see, it is simply an exact representation of the structure defined in callback_demo.h, albeit in a Lispy manner.

cffi:with-foreign-slots is a very important macro that destructures its pointer argument into the supplied slots. Note that the slot names must be the same as those provided in the person structure defined in the Common Lisp code. Note also the use of cffi:foreign-slot-value instead of cffi:mem-aref as in the previous post. The rule of thumb is this – use cffi:foreign-slot-value when accessing slots of a structure, and use cffi:mem-aref when accessing atomic types.

Finally, we actually invoke the prefix_name function from test-callback. We create two instances of the person structure, and then pass the callback function in the foreign-funcall invocation using the macro cffi:callback.

cffi:callback simply returns a pointer which is what the prefix_name function in libcallbackdemo.dylib requires. The cycle is complete!

As we can see from the output, the names are prepended with the correct title.

Useful functions

Here is the summarised list of the additional functions that were used in the demo:

  • cffi:defcstruct
  • cffi:defcallback
  • cffi:with-foreign-slots
  • cffi:foreign-slot-value
  • cffi:callback

Conclusion

The cffi library is a very powerful and well-designed library for dealing with native libraries. It is also quite vast, and I would most definitely recommend browsing through the official manual for further examples, and also for usage patterns for your specific needs.

Next up, I will demonstrate interop between C (and C++) and Java using the JNA library, which is far superior to the alternative of using pure JNI. That will also be in two parts.

Interop mini-series – Callbacks special! (Common Lisp special) (Part 2b)

This is a continuation of the previous post, the callbacks interlude. I decided to give the section pertaining to Common Lisp its own post, as I think there is some good educational value in it.

We carry on from where we left off last time. We continue with the same squaring number callback example.

As a quick refresher, the idea is to implement a synchronous callback scenario. The function client invokes another function squarify which squares the passed value and invokes a callback function callback.
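
For reference, the same synchronous-callback shape in Haskell is nothing more than a higher-order function. A rough sketch, with the names chosen to mirror the ones used below:

squarify :: Int -> (Int -> IO ()) -> IO ()
squarify n cb = cb (n * n)

callback :: Int -> IO ()
callback n = putStrLn ("Received: " ++ show n)

-- squarify 19 callback would print: Received: 361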

How it’s done in Common Lisp

Let’s start off with our first attempt to implement the solution in Common Lisp.

;;;; Callback demo using the squarify example.

(defpackage :callback-demo-user
  (:use :cl))

(in-package :callback-demo-user)

(defun callback (n)
  (format t "Received: ~d~%" n))

(defun squarify (n cb)
  (funcall cb (* n n)))

(defun client ()
  (let ((n (progn
             (princ "Enter a number: ")
             (read))))
    (squarify n #'callback)))

A sample run:

CALLBACK-DEMO-USER> (client)
Enter a number: 19
Received: 361
NIL

That’s the direct equivalent of all the demos shown so far. However, since Common Lisp is a functional language (albeit not as pure as, say, Scheme or Haskell), we can certainly do better!

In most Functional Programming languages, higher order functions are usually deployed to do the job. So let’s see if we can cook up something nicely functional like function composition.
Here’s a first attempt:

(defun client ()
  (funcall #'(lambda (n)
               (format t "Received: ~d~%" n))
           (funcall #'(lambda (n)
                        (* n n))
                    (funcall #'(lambda ()
                                 (princ "Enter number: ")
                                 (read))))))

Which produces:

CALLBACK-DEMO-USER> (client)
Enter number: 19
Received: 361
NIL

As expected! Now, as you may know, funcall simply takes a function and some arguments (optional), and applies the function to those arguments. In this case, we simply compose them in the proper order so that the types match up: read a number -> square it -> print message.
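
Before generalising, it is worth recalling what we are aiming for: in Haskell, the very same pipeline falls out of the composition operator (.). A rough sketch (readLn is used here to read the number):

main :: IO ()
main = do
  putStr "Enter a number: "
  n <- readLn :: IO Int
  (putStrLn . ("Received: " ++) . show . (\x -> x * x)) n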

However, let’s work our way to a generic compose function that simulates the behaviour of Haskell’s composition operator. The previous function can be improved by defining a new version that composes the three functions in the mentioned order (so as to match types):

The compose function:

(defun compose (fn gn hn)
  #'(lambda (&rest args)
      (funcall fn (funcall gn (apply hn args)))))

And the client to test it:

(defun client ()
  (funcall (compose #'(lambda (x)
                        (format t "Received: ~d~%" x))
                    #'(lambda (x)
                        (* x x))
                    #'(lambda ()
                        (princ "Enter a number: ")
                        (read)))))

And the output is the same:

CALLBACK-DEMO-USER> (client)
Enter a number: 19
Received: 361
NIL

So what’s changed? Well, taking inspiration from the nested funcall version, we defined compose to invoke the functions in the proper order – first read the number, then square it, and finally print it! (Remember that the functions are applied in the reverse of the order in which they are entered.)

Note that the last function invocation is done using apply instead of funcall because &rest args produces a list of arguments, and funcall does not work with that (unless the function definition takes a list itself as a parameter, but that is not the general case), whereas apply works very well with lists and destructures them correctly.

How can we make this generic enough though? We notice the pattern – we invoke apply on the innermost function call, but we use funcall for the rest of the function call chain. This means that we must handle two cases – if there is a single function passed in, we should simply use apply on that, and if not, we should take care to chain them up as discussed. This lends itself to a nice recursive definition as shown next.

The updated compose function:

(defun compose (&rest funcs)
  (labels ((f (funcs args)
             (if (null (cdr funcs))
                 (apply (car funcs) args)
                 (funcall (car funcs) (f (cdr funcs) args)))))
    #'(lambda (&rest args)
        (f funcs args))))

And the test client for it:

(defun client ()
  (funcall (compose #'(lambda (x)
                        (format t "Received: ~d~%" x))
                    #'(lambda (x)
                        (* x x))
                    #'(lambda ()
                        (princ "Enter number: ")
                        (read)))))

And the output:

CALLBACK-DEMO-USER> (client)
Enter number: 19
Received: 361
NIL

Explanation: What we do is simply generalise the three-function version of compose into a generic function. For this, we define an internal function f that takes the supplied functions and the arguments as input.

f then recursively decomposes the function applications. The base (stopping) condition is when there is only one function left. The (null (cdr funcs)) test then takes care of returning the single apply call that we need, applied, of course, to the args argument.

As the recursion unwinds the call stack, successive funcalls are applied at each stage. This is exactly in line with the algorithm discussed at the end of the last section.

Now we are almost home and dry! Pay special attention to the order in which the lambda equivalents of the functions are entered in the client function – the printer (our callback), then the squarer, and finally the reader – and remember that they are applied in the reverse of that order.

We could stop here, but there’s one more change that we can make. The current version of compose works absolutely as expected, but the order in which the functions must be supplied is the opposite of what a user would intuitively expect. The expected order would be, in English, “read in the number, square it, and then print out a message indicating that the number was received”.

Let’s fix that last bit for our final version of compose.

Final version of compose:

;;; final version of compose
(defun compose (&rest funcs)
  (labels ((f (funcs args)
             (if (null (cdr funcs))
                 (apply (car funcs) args)
                 (funcall (car funcs) (f (cdr funcs) args)))))
    #'(lambda (&rest args)
        (f (reverse funcs) args))))

And the corresponding test code:

;;; test out the final version of compose
(defun client ()
  (funcall (compose #'(lambda ()
                        (princ "Enter a number: ")
                        (read))
                    #'(lambda (x)
                        (* x x))
                    #'(lambda (x)
                        (format t "Received: ~d~%" x)))))

And now let’s test out and see if it works!

CALLBACK-DEMO-USER> (client)
Enter a number: 19
Received: 361
NIL

Success!

The only difference is this line: (f (reverse funcs) args). We simply reverse the order of the received functions before passing them to the recursive function f, and the rest of the code remains exactly the same!

And, of course, this is purely functional! Sweet, ain’t it?

The compose function could be optimised in multiple ways – by converting it to an iterative version, for instance – but conceptually, it works exactly as advertised.
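
For comparison, once we restrict ourselves to functions of a single, uniform type, the Haskell rendition of the same idea is a one-liner (a sketch; Haskell’s types are stricter than the Lisp version, which happily composes functions of differing arities):

compose :: [a -> a] -> (a -> a)
compose = foldr (.) id

-- like our pre-reversal Lisp version, this applies right to left:
-- compose [(+ 1), (* 2)] 5  ==>  (+ 1) ((* 2) 5)  ==>  11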

Conclusion

This post illustrates why I love Common Lisp! Even as I make my journey through the world of Common Lisp, my admiration for it only grows. If there is some feature that we would like to incorporate into the language, it can be done in just a few lines of code! No other language truly comes close in terms of expressiveness and extensibility.

Functional Programming – a Quick and Dirty Introduction

What is Functional Programming?

Functional Programming is, despite its newfound popularity in recent times, a rather old programming paradigm. What exactly constitutes Functional Programming is surprisingly difficult to pin down today, partly because almost every language tends to do things in its own way and call it “functional”, and developers, too, use the term for a wide variety of situations. In a sense, it has been reduced to a mere buzzword, and there is no real consensus on what exactly Functional Programming is. In this blog, I will give my own perspective on the whole situation and try to make sense of this rather messy state of affairs.

Functional Programming is, historically speaking, considered to be a paradigm of programming in which programming is done using only functions. What do we mean by function? We, of course, mean mathematical functions. A function in mathematics is a mapping of each value taken from a set of values (called the Domain of the function) to a unique value in another set (which may be the same set, and which is called the Range of the function). There are various types of functions – bijective, injective, surjective, identity, constant, empty, etc. However, we need not bother ourselves with those distinctions. The only important bit to remember is that all functions (whatever sub-type they may be) have the following two essential properties:

  • Each element in the Domain is used for the “mapping”
  • Each element in the Domain maps to exactly one value in the Range

For instance, suppose we have f(x) = sin(x); f(x) here defines a function. Let us plot this function to understand it better:

[Plot of f(x) = sin(x)]

Here we have plotted the graph of f(x) using Wolfram Alpha. The function is clearly periodic (in fact, it has a period of 2π radians), and we can see from the graph that the Domain is the set of all real numbers, and the Range is the set of real numbers in the bounded interval [-1, 1]. It is easy to see that both the constraints incumbent upon a function are fulfilled – each real number in the Domain is used (continuous curve), and each real number in the Domain has a unique value in the Range (single-valued curve). Note that two elements in the Domain may map to the same value in the Range; there is no restriction on that. So just why are we flogging this horse to death? Well, it’s because it is vitally important to unambiguously understand what a function is and what it is not. A lot of times, people mistake a relation for a function. A relation is a superset of a function: a relation may map the same element in the Domain to multiple elements in the Range. For instance, the relation “is a son of” is a Relation and not a Function, because a person is the son of both his mother as well as his father. What we are interested in for Functional Programming is the concept of a function.

Functional Programming is therefore a paradigm (note the emphasis – the reason will shortly become clear) of programming in which functions form the basis of all processing. Historically, it has its earliest roots in the Lambda Calculus. Alonzo Church developed this beautifully elegant framework early in the 20th Century, and in its most extreme form, even data was constructed using only single-parameter functions (Church numerals). However, even if we assume that data exists in a different world from functions, the basic implication of using the idea of a mathematical function in programming is that for every input to a function, we get a unique output, and that this is repeatable ad infinitum. This is also known as “referential transparency”, which means that if we know the argument to a function call, we can replace the entire call with its result – effectively a constant – without changing the meaning of the program. This behaviour also means that we can run such functions in parallel, and testing becomes trivial.

As mentioned in the last paragraph, a language doesn’t necessarily have to ensure that it supports only functions (in the mathematical sense). The only constraints are writing functions that generate their output solely on the basis of their parameters (no globals involved), always generating the same output for the same argument(s), and ensuring that no side-effects take place within the body of the function. This means that programs can be written in a functional style in virtually any language. However, as we will see in the next section, there are some concepts that are considered the core concepts of Functional Programming, and different languages come with different levels of built-in support for these concepts.
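
Before moving on, here is referential transparency in miniature, as a small Haskell sketch:

square :: Int -> Int
square x = x * x

-- square depends only on its argument, so any occurrence of
-- `square 5` may be replaced by 25 without changing the program:
-- square 5 + square 5  ==>  25 + 25  ==>  50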

Functional Programming concepts

Functional Programming today is considered to consist of the following core ideas:

    • First-Class functions and Higher-Order functions: These are related concepts which (from a programming perspective) mean that in a language that supports first-class functions (or higher-order functions), a function is treated on par with any other language entity. This means that functions can be bound to variables, defined within other functions, passed to other functions, and returned from other functions. Most of the strength of Functional Programming derives from these features, especially from two crucial ideas – Currying and Partial Application.

      Currying and Partial Application are intimately connected with each other. To understand them better, let us take the help of a simple example. Consider that we wish to write a function that takes three parameters and produces a result. Here’s how we might write it in Haskell:

      mult :: (Num a) => (a, a, a) -> a
      mult (x, y, z) = x * y * z
      
      *Main> mult (1, 2, 3)
      6
      

      As expected, it evaluates the product correctly. Nothing special here – this is exactly how we would write this function in any run-of-the-mill imperative language.

      Now let’s try something else. Let’s try to write this as a function that will accept one parameter at a time:

      mult' :: (Num a) => a -> (a -> (a -> a))
      mult' x y z = x * y * z
      
      *Main> let f = mult' 1
      *Main> :t f
      f :: Num a => a -> a -> a
      
      *Main> let g = f 2
      *Main> :t g
      g :: Num a => a -> a
      
      *Main> let h = g 3
      *Main> :t h
      h :: Num a => a
      
      *Main> h
      6
      

      So what changed compared to the previous function? Observe that the type of the function has changed. Previously, it required a triplet as a single parameter. Here’s how one would read the type signature: mult’ is a function that takes a numeric parameter and returns a function that takes a numeric parameter and returns a function that takes a numeric parameter and which finally returns a numeric value as the return value of the function.

      The main advantage of this is that it allows us to build our multiplication function out of functions of a single parameter each. To see that better, look at the type signatures of ‘f’, ‘g’, and ‘h’ at each stage. They correspond exactly with the verbal description of mult’. Functions like mult’, which implement multi-parameter functions in terms of functions that take a single parameter each, are called “curried” functions. ‘f’, ‘g’, and ‘h’ demonstrate “partial application” of the overall mult’ function, whereby we can build up a series of functions from a curried function by supplying one argument at a time. Yes, even ‘h’, which is a value, can be considered to be a function in Functional Programming land! First-class functions, remember? Also note how both these concepts point back to the Lambda Calculus as discussed in the previous section. In fact, in order to make this equivalence blatantly clear, we could implement mult’ explicitly in terms of lambdas of a single parameter each (note that \x is the Haskell syntax for a lambda expression that takes the parameter ‘x’):

      mult'' :: (Num a) => a -> a -> a -> a
      mult'' = \x -> \y -> \z -> x * y * z
      
      *Main> let f = mult'' 1
      *Main> :t f
      f :: Num a => a -> a -> a
      
      *Main> let g = f 2
      *Main> :t g
      g :: Num a => a -> a
      
      *Main> let h = g 3
      *Main> :t h
      h :: Num a => a
      
      *Main> h
      6
      

      Exactly the same result as expected. In fact, all functions in Functional Programming can be considered to be built up of single-parameter functions. Nifty, isn’t it?

    • Pure Functions:
      A pure function is a function whose return value is determined entirely by its arguments, and which returns the same value for the same set of arguments each time the function is invoked. As mentioned earlier, this also means that the function must not have any side-effects (such as modifying a global variable, or even accessing the value of a mutable global variable). This also automatically provides the favourable property of “Referential Transparency”, as explained before. Since I/O is the major source of side-effects, how can we do anything useful with such constraints? Well, different languages deal with this conundrum in different ways. Haskell tries to remain purely functional by using the concept of a “Monad”. Specifically, it uses the IO Monad to wrap IO operations within an abstraction that takes the current state of the world, performs the modifications, and then returns this modified state as the new state of the world. Mathematically speaking, this preserves purity in Haskell even during IO, though of course side-effects have happened in the meantime, so the point can appear moot. Other languages such as Clojure opt for a more pragmatic approach and accept some side-effects as a fact of life. Clojure provides some very efficient immutable data structures that ensure that mutation is almost never needed, and it has a very strong STM (Software Transactional Memory) to wrap mutations of shared state within transactions that keep the state of the entire system consistent. Yet other languages such as Common Lisp put the onus of ensuring proper handling of side-effects on the user.
    • Recursion: Recursion is the process whereby a function calls itself (or, in the case of mutual recursion, another function that in turn calls it back). This is, apart from Higher-Order Functions, the most important concept in practical Functional Programming. In fact, most of the earlier Lisps, especially Scheme, depended tremendously on recursion as a substitute for looping (which is another major source of side-effects). This is why most of the languages in the Lisp family (especially Common Lisp and Scheme) have highly-optimised implementations of operations on the list data structure (a list is how S-expressions in Lisp are represented).

      Strangely enough, in Common Lisp, the standard does not mandate Tail-Call Optimisation (TCO). However, most implementations come bundled with TCO. TCO is simply a way of converting recursive calls in the “tail-call position” to a simple branch (goto). This ensures that the stack does not blow up when executing deeply recursive calls. To get an idea of this, consider the following example:

      The factorial of a number is defined as:

      n! = \begin{cases} 1 & \text{if } n = 0 \\ n \cdot (n-1)! & \text{if } n \geq 1 \end{cases}

      This may be directly translated into code in Common Lisp as:

      CL-USER> (defun factorial (n)
                 (if (zerop n)
                     1
                     (* n (factorial (1- n)))))
      FACTORIAL
      
      CL-USER> (factorial 20)
      2432902008176640000
      

      However, the above code will blow up the stack for large enough values of ‘n’. To ensure that we don’t end up with stack overflow errors, let’s convert this program into a tail-recursive version of the same algorithm, which TCO can turn into a loop:

      CL-USER> (defun factorial (n)
                 (labels ((f (n acc)
                            (if (zerop n)
                                acc
                                (f (1- n) (* acc n)))))
                   (f n 1)))
      FACTORIAL
      
      CL-USER> (factorial 100)
      93326215443944152681699238856266
      70049071596826438162146859296389
      52175999932299156089414639761565
      18286253697920827223758251185210
      916864000000000000000000000000
      

      How is this function amenable to TCO but not the previous one? Remember that a function’s arguments are always evaluated before the function call is made. In the previous case, the * operator is the culprit – it needs to multiply n by the result of the recursive call. This means that the current state of the function must be saved on the stack, and only once the recursion bottoms out is the stack unwound, the multiplications performed, and the result returned.

      In the second example, we remove this dependency: the recursive call to f is the very last operation performed, and the running product (which ends up being the factorial of the supplied argument) is carried along in the ‘acc’ parameter. A call in this position is a “tail call”, which the compiler can replace with a simple jump – that is precisely TCO. Note though that many (if not all) Common Lisp implementations may convert the first example into a tail-recursive version anyway. However, in many cases the compiler can’t perform this magic conversion, and the programmer has to ensure that the recursion is indeed in tail position.
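
      The same transformation reads almost identically in Haskell, for comparison (a sketch; GHC eliminates the tail call here):

      factorial :: Integer -> Integer
      factorial n = go n 1
        where
          -- go is tail recursive: the recursive call is the last thing done
          go 0 acc = acc
          go k acc = go (k - 1) (acc * k)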

    • Strict vs Non-Strict (Lazy):
      Strict evaluation evaluates an expression completely and immediately, whereas non-strict (better known as lazy) evaluation computes only as much of the result as is currently requested. Strict evaluation has the advantage that performance is deterministic. The main disadvantage is that certain operations are not possible with such a strategy. For instance, suppose we want to process a series of numbers without knowing upfront how many of them we will actually use. Under strict evaluation this is not possible: we would have to materialise the whole data set before processing it, and a very large data set might cause the whole program to crash.

      Non-strict evaluation solves this very situation by computing only as many elements as are currently required (this is how generators, used extensively in Python, behave). It even enables us to define infinite streams of data! Non-strict evaluation also has the added benefit that branches of a computation whose results are never demanded need not be evaluated at all, which is not really possible in an eager language. The main problem with non-strict evaluation is that performance is non-deterministic, which can lead to issues when trying to profile a large software application. This is also the reason why non-strict-by-default languages such as Haskell keep adding support for opting into strict evaluation; Clojure, which is strict by default, approaches the same trade-off from the other side with its lazy sequences.
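
      To make the infinite-streams point concrete, here is a tiny Haskell sketch:

      naturals :: [Integer]
      naturals = [1 ..] -- an infinite list; perfectly fine under lazy evaluation

      firstSquares :: [Integer]
      firstSquares = take 5 (map (^ 2) naturals)
      -- take demands only five elements, so only five are ever computed:
      -- firstSquares  ==>  [1,4,9,16,25]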

    • Strong Type Systems: A type system is a way of assigning a property called a “type” to entities in a programming language such that they may be classified into sets whose collective properties can then be defined, and which then allows the compiler to reject invalid programs during compilation itself. Even though it may appear that a type system is exclusively associated with statically-typed languages, that is not so. Even dynamic languages may be strongly (Common Lisp) or weakly (JavaScript) typed, and the runtime environment is responsible for enforcing correct behaviour. The main difference between those two categories of typed languages is that statically-typed languages provide us with a mechanism to explicitly specify types for program entities ourselves.

      My personal opinion is that this is not really a core concept of Functional Programming. However, most experts today claim that this is a vital pillar of the whole paradigm. In any case, one cannot truly work with modern functional languages unless one has a good understanding of the different kinds of type systems in place today. Most of the statically-typed functional languages have type systems based on the Hindley-Milner system; Haskell is one such example. Scala, however, despite being strongly typed, cannot fully support Hindley-Milner inference because subtyping (a consequence of its OOP nature) gets in the way.

Conclusion and Next Steps

This was a brief but fairly comprehensive introduction to what constitutes Functional Programming today. In the next few posts, I will examine these concepts in a few modern languages (whether considered functional by the hoi polloi or otherwise): Common Lisp, Java, and C++. In the near future, I will also examine the same for Python, and most importantly, Haskell. Haskell is arguably the most prominent purely functional language in use today. I will (in the not-too-distant future) write a series of posts discussing and researching Functional Programming in depth, and my vehicle of choice for that endeavour will be Haskell!

Streams in Java – a Hands-on Approach

In this blog post, I will discuss the Streams feature in Java 8 (and above) from a pragmatic viewpoint. I will describe it and its uses as I have experienced myself.

What is this Stream business after all?

Some people new to Java often get confused by the term “stream”. Java (before version 8) already used “streams” to mean essentially the same thing that C and C++ use the term for – a conceptual stream of data, whether bytes or characters. This is why there is a plethora of interfaces and classes in the java.io package that have “stream” in their names. However, this is not what the new Streams API is all about.

In Java 8, a lot of features were introduced that enhanced support for Functional Programming in Java – lambdas, method references, and streams. A Stream in this new context is simply a way of streaming data through a set of operations that process the data at every stage. It does not really matter how the data came to be in the first place – it might have been read in from a file, from a socket, from another String, from a database, etc. What matters is that the data can be passed through different functional operations and then collected into another data structure or printed out to some output (console, file, database, another stream). The important part is that the original data is preserved at every stage – there is no modification of the original data structure (as will be demonstrated through examples later on). This is why streams support functional programming concepts and make for very terse and (for those familiar with the style) readable code. Of course, there are pros and cons to this new feature.

Streams therefore operate at a much higher conceptual level than merely processing data through methods.
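
If you have read the folding and composition posts elsewhere on this blog, this pipeline style should look familiar. As a point of reference, here is a rough Haskell analogue of the kind of stream pipeline we will build below (nub removes duplicates, standing in for collecting into a set):

import Data.Char (toUpper)
import Data.List (nub)

processNames :: [String] -> [String]
processNames = nub . map (map toUpper) . filter ((> 3) . length)

-- processNames ["Pat", "Mike", "Sally"]  ==>  ["MIKE","SALLY"]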

The Hitchhiker’s Guide to the Streams API in Java

The Stream API is specified in the java.util.stream package. There are a few very important interfaces defined here:

  • Stream
  • IntStream
  • LongStream
  • DoubleStream
  • Collector

Of these, we don’t really make use of Stream and Collector all that much directly. Instead, we use them through library classes that implement or produce them. Streaming is available on all the major linear collection interfaces – List, Set, Queue, and Deque. Note that this support is provided by virtue of the fact that the Collection interface has a “default method”, default Stream<E> stream(), and also its parallel counterpart, default Stream<E> parallelStream(). The reason (as mentioned in an earlier post) that the Collection interface itself doesn’t extend the Stream interface is to ensure that legacy code is not broken. By their very nature, streams are not implemented for non-linear data structures such as Map. Only linear data structures have a well-defined way of streaming their data for processing by various operations.

The available classes in this package are:

  • Collectors
  • StreamSupport

Of these two, we will only ever be concerned with Collectors. StreamSupport provides low-level APIs that library writers can utilise to create their own versions of Streams.

The Collectors class is an extremely important class that provides methods that return a Collector object. Further processing is then done by the methods of the Stream interface, such as the “collect” method. This will become much clearer when we get down to examples.

Examples of Streams in action

So let’s get down and dirty with it! First off, let’s start up a JShell session and get some of the basic imports that we need for the demo out of the way:

jshell> import java.util.function.*

jshell> import java.util.stream.*

jshell> /imports
|    import java.io.*
|    import java.math.*
|    import java.net.*
|    import java.util.concurrent.*
|    import java.util.prefs.*
|    import java.util.regex.*
|    import java.util.function.*
|    import java.util.stream.*
|    import java.util.*

As we can see, many of the most commonly used packages are already imported by default in JShell. However, the java.util.function and java.util.stream packages need to be explicitly imported.

Now that we’ve got that out of the way, let’s start off with some simple examples.

  • Let’s create a list of names, filter out those names which are longer than 3 characters, convert them to upper case, collect them in a set (remove duplicates), and then print them all out in order:
    jshell> List<String> names = Arrays.asList("Pat", "Mike", "Sally", "Robert", "Sam", "Matt", "Timmy", "Gennady", "Petr", "Slava", "Zach", "Paula", "Meg", "Matt", "Mike");
    names ==> [Pat, Mike, Sally, Robert, Sam, Matt, Timmy, Gennady, Petr, Slava, Zach, Pau ...
    
    jshell> names.stream().
       ...> filter((s) -> s.length() > 3).
       ...> map(String::toUpperCase).
       ...> collect(Collectors.toSet()).
       ...> forEach(System.out::println)
    GENNADY
    MIKE
    ZACH
    TIMMY
    PETR
    SLAVA
    ROBERT
    MATT
    SALLY
    PAULA
    

    Note that String::toUpperCase is what is known as a “method reference”. A method reference can be used anywhere an instance of a Functional Interface is expected. In this case, String::toUpperCase as used in the context of the “map” operation is equivalent to the following lambda expression:

     map((s) -> s.toUpperCase())
    

    It is just more convenient and terse to use method references whenever possible in lieu of lambda expressions, especially for well known methods such as in this case.

    And just to emphasise the point that the actual data structure itself has not been modified:

    jshell> names.forEach(System.out::println)
    Pat
    Mike
    Sally
    Robert
    Sam
    Matt
    Timmy
    Gennady
    Petr
    Slava
    Zach
    Paula
    Meg
    Matt
    Mike
    

    The same code before Java 8 might look something like so:

    jshell> interface FilterPredicate<T> {
       ...>          boolean test(T t);
       ...>      }
    |  created interface FilterPredicate
    
    jshell>      
    
    jshell>      interface MapFunction<T, R> {
       ...>          R apply(T t);
       ...>      }
    |  created interface MapFunction
    
    jshell>
    
    jshell> List<String> filterNames(List<String> names, FilterPredicate<String> predicate) {
       ...>          List<String> filteredNames = new ArrayList<>();
       ...>          
       ...>          for (String name : names) {
       ...>              if (predicate.test(name))
       ...>                 filteredNames.add(name);
       ...>          }
       ...>          return filteredNames;
       ...>      }
    |  created method filterNames(List<String>,FilterPredicate<String>)
    
    
    
    jshell>
    
    jshell>      List<String> mapNames(List<String> names, MapFunction<String, String> mapper) {
       ...>          List<String> mappedNames = new ArrayList<>();
       ...>          
       ...>          for (String name : names) {
       ...>              mappedNames.add(mapper.apply(name));
       ...>          }
       ...>          
       ...>          return mappedNames;
       ...>      }
    |  created method mapNames(List<String>,MapFunction<String,String>)
    
    jshell>      
    
    jshell>      Set<String> collectNames(List<String> names) {
       ...>          Set<String> uniqueNames = new HashSet<>();
       ...>          
       ...>          for (String name : names) {
       ...>              uniqueNames.add(name);
       ...>          }
       ...>          
       ...>          return uniqueNames;
       ...>      }
    |  created method collectNames(List<String>)
    
    jshell> Set<String> processedNames = 
       ...>             collectNames(mapNames(filterNames(names, 
       ...>                 new FilterPredicate<String>() {
       ...>                     @Override
       ...>                     public boolean test(String s) {
       ...>                         return s.length() > 3;
       ...>                     }}), new MapFunction<String, String>() {
       ...>                         @Override
       ...>                         public String apply(String name) {
       ...>                             return name.toUpperCase();
       ...>                         }
       ...>                     }));
    processedNames ==> [GENNADY, MIKE, ZACH, TIMMY, PETR, SLAVA, ROBERT, MATT, SALLY, PAULA]
    
    jshell>                     
    
    jshell>      for (String name : processedNames) {
       ...>         System.out.println(name);
       ...>      } 
    GENNADY
    MIKE
    ZACH
    TIMMY
    PETR
    SLAVA
    ROBERT
    MATT
    SALLY
    PAULA
    

    Wow! So much code (and a lot of it boilerplate, especially the repetitive for-each looping) to achieve what essentially took one statement using Java Streams! Also, from a readability perspective, I would argue that the Streams-based version is much more readable and understandable. A lot of the boilerplate in the non-Streams version disrupts one’s flow when reading that code — instead of focusing on the task at hand, one is distracted by all the language-specific cruft. Also note that the pre-Java 8 version of the code was written in a somewhat more functional style than what a lot of Java developers would actually do. If we had written out the code in a purely imperative manner, it would be at least twice as long, with a lot more cognitive dissonance to boot.

  • For the second example, let us generate an infinite stream of natural numbers, take the first 100 numbers, keep only the even numbers, double each of them, and finally generate their sum.
    jshell> IntStream.iterate(1, (n) -> n+1).
       ...> limit(100).
       ...> filter((d) -> d%2 == 0).
       ...> map((i) -> i*2).
       ...> sum()
    $29 ==> 5100
    

    How much code would that take without using Streams? Heh. Some explanation of the code: we use the “iterate” method of the IntStream interface to generate an infinite stream of natural numbers, using 1 as the seed value and the second parameter (the lambda expression) as a generator function. Then we use “limit” to take the first 100 values, keep only the even numbers, map each remaining value to double its value, and finally call the terminal operation “sum” to produce the sum of the processed stream of numbers.

    At this juncture, it would probably be appropriate to comment that there are two types of operations when it comes to streams – non-terminal (or intermediate) operations (which take values and produce processed values), and terminal operations (which simply consume values and generate a final value). “limit”, “filter”, and “map” are examples of the former, whereas “sum” is, as noted, a terminal operation. Knowing this distinction can save a lot of headache when code seemingly doesn’t work as expected.

  • For the final example, let us group together a bunch of students by grade:
    jshell> enum Grade {
       ...>         A, B, C, D, E, F;
       ...>     }
    |  created enum Grade
    
    jshell>
    
    jshell>     class Student {
       ...>         private int id;
       ...>         private String name;
       ...>         private Grade grade;
       ...>         
       ...>         public Student(int id, String name, Grade grade) {
       ...>             this.id = id;
       ...>             this.name = name;
       ...>             this.grade = grade;
       ...>         }
       ...>         
       ...>         public int getId() { return this.id; }
       ...>         public String getName() { return this.name; }
       ...>         public Grade getGrade() { return this.grade; }
       ...>         
       ...>         @Override
       ...>         public String toString() {
       ...>             return "{ " + this.id + ", " + this.name + ", " + this.grade + " }"; 
       ...>         }
       ...>     }
    |  created class Student
    
    jshell>     
    
    jshell>     List<Student> students = 
       ...>     Arrays.asList(new Student(1, "Rich", Grade.F),
       ...>                   new Student(2, "Peter", Grade.A),
       ...>                   new Student(3, "Sally", Grade.B),
       ...>                   new Student(4, "Slava", Grade.B),
       ...>                   new Student(5, "Megan", Grade.C),
       ...>                   new Student(6, "Edward", Grade.D),
       ...>                   new Student(7, "Amanda", Grade.A),
       ...>                   new Student(8, "Petr", Grade.B),
       ...>                   new Student(9, "Susan", Grade.F),
       ...>                   new Student(10, "Arnold", Grade.E));
    students ==> [{ 1, Rich, F }, { 2, Peter, A }, { 3, Sally, B }, { 4, Slava, B }, { 5, Meg …
    
    jshell>
    
    jshell> Map<Grade, List<Student>> studentData = 
       ...>             students.
       ...>             parallelStream().
       ...>             collect(Collectors.
       ...>                     groupingBy(Student::getGrade))
    studentData ==> {A=[{ 2, Peter, A }, { 7, Amanda, A }], E=[{ 10, Arnold, E }], B=[{ 3, Sally ...
    
    jshell>             
    
    jshell> for (Map.Entry<Grade, List<Student>> entry : studentData.entrySet()) {
       ...>          System.out.println("Grade: " + 
       ...>                             entry.getKey() + 
       ...>                             ", Students: " + 
       ...>                             entry.getValue());
       ...>     }
    Grade: A, Students: [{ 2, Peter, A }, { 7, Amanda, A }]
    Grade: E, Students: [{ 10, Arnold, E }]
    Grade: B, Students: [{ 3, Sally, B }, { 4, Slava, B }, { 8, Petr, B }]
    Grade: F, Students: [{ 1, Rich, F }, { 9, Susan, F }]
    Grade: C, Students: [{ 5, Megan, C }]
    Grade: D, Students: [{ 6, Edward, D }]
    

    As can be seen, the Collectors class comes bundled with extremely useful methods to perform almost any conceivable processing on data. In this case, we’re simply grouping the students based on their grades.

    Also note the use of “parallelStream”. Java supports parallel streams for operations that can be parallelised (i.e. they don’t have any real dependencies between one another). In this case, since we are processing a bunch of student data and aggregating them into groups based on grade, this is precisely such a situation where we can achieve performance boosts using parallel streams, especially when the data sets grow in size.
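Whether parallelism actually pays off, though, depends on the size of the data and the cost of each operation, so it is worth measuring rather than assuming. Here is a rough sketch of how one might compare the two for the grouping example above (the timing approach mirrors the sumOfEvenNumbers examples later in this post; absolute numbers will of course vary by machine and data size):

void compareGrouping(List<Student> students) {
    long start = System.currentTimeMillis();
    Map<Grade, List<Student>> sequential =
        students.stream()
                .collect(Collectors.groupingBy(Student::getGrade));

    long middle = System.currentTimeMillis();
    Map<Grade, List<Student>> parallel =
        students.parallelStream()
                .collect(Collectors.groupingBy(Student::getGrade));

    long end = System.currentTimeMillis();
    System.out.format("sequential: %d ms, parallel: %d ms\n",
                      middle - start, end - middle);
}

As a design note, when collecting a parallel stream into groups, Collectors.groupingByConcurrent can reduce the cost of merging per-thread results (at the price of an unordered ConcurrentMap).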

Some things to watch out for

  • Order of operations:
    Since streams provide a very high-level way to express data processing, a lot of the inner details get hidden from the developer. However, understanding some of these low-level details is crucial when working with streams. The most important part is knowing how the flow of processing takes place in streams.

    Suppose we want to find the sum of all odd numbers (doubled) between 1 and 10^6. We could do it like this:

    jshell> OptionalLong sum = 
       ...>             LongStream.iterate(1, (n) -> n+1).
       ...>                       limit(1_000_000).
       ...>                       map((b) -> b*2).
       ...>                       filter((i) -> i%2 != 0).
       ...>                       reduce((one, two) -> one + two);
    sum ==> OptionalLong.empty
    
    jshell> sum.getAsLong()
    |  java.util.NoSuchElementException thrown: No value present
    |        at OptionalLong.getAsLong (OptionalLong.java:119)
    |        at (#28:1)
    
    

    Why does this not work? Well, it’s because we are mapping each number to double its value first, thereby ensuring that all the numbers are even before the filter ever sees them! Let’s swap map and filter around to see that it then works:

    jshell> OptionalLong sum = 
       ...>             LongStream.iterate(1, (n) -> n+1).
       ...>                       limit(1_000_000).
       ...>                       filter((i) -> i%2 != 0).
       ...>                       map((b) -> b*2).
       ...>                       reduce((one, two) -> one + two);
    sum ==> OptionalLong[500000000000]
    
    jshell> sum.getAsLong()
    $30 ==> 500000000000
    

    Et voila! This shows how important it is to get the ordering of non-terminal operations correct since they may result in logical errors which cannot be caught by the compiler.

    Now, let’s flip the same example around: suppose we want to calculate the sum of even numbers (incremented by 2) from 1 to 10^9 (note the much larger bound this time). In this case, it doesn’t matter whether we map first and then filter, or filter first and then map. The result is the same (adding 2 preserves parity, so the same numbers survive the filter either way). What about performance? Let’s see:

    Let’s map and then filter:

    jshell>     void sumOfEvenNumbersSlow() {
       ...>         long start = System.currentTimeMillis();
       ...>         
       ...>         OptionalLong sum = 
       ...>             LongStream.iterate(1, (n) -> n+1).
       ...>                       limit(1_000_000_000).
       ...>                       map((b) -> b+2).
       ...>                       filter((i) -> i%2 == 0).
       ...>                       reduce((one, two) -> one + two);
       ...>         
       ...>         long end = System.currentTimeMillis();
       ...>         System.out.format("Sum: %d, time taken = %.3fs\n", sum.getAsLong(), (double)(end-start)/1000);
       ...>     }
    |  created method sumOfEvenNumbersSlow()
    
    jshell>
    
    jshell>     sumOfEvenNumbersSlow()
    Sum: 250000001500000000, time taken = 24.378s
    

    Now let’s filter first and the map:

    jshell>     void sumOfEvenNumbersFast() {
       ...>         long start = System.currentTimeMillis();
       ...>         
       ...>         OptionalLong sum = 
       ...>             LongStream.iterate(1, (n) -> n+1).
       ...>                       limit(1_000_000_000).
       ...>                       filter((i) -> i%2 == 0).
       ...>                       map((b) -> b+2).
       ...>                       reduce((one, two) -> one + two);
       ...>         
       ...>         long end = System.currentTimeMillis();
       ...>         System.out.format("Sum: %d, time taken = %.3fs\n", sum.getAsLong(), (double)(end-start)/1000);
       ...>     }
    |  created method sumOfEvenNumbersFast()
    
    jshell>
    
    jshell> sumOfEvenNumbersFast()
    Sum: 250000001500000000, time taken = 22.012s
    

    Notice the performance difference? It is modest at this scale, but the gap widens as the size of the data set and the cost of each operation increase. This illustrates the importance of properly ordering the operations when dealing with streams. Always filter before mapping whenever the order doesn’t matter — don’t waste precious cycles on operations whose results are going to be dropped anyway.

    Finally, another important point to note is this – non-terminal operations such as filter and map work by generating a value and passing it down the pipeline, and this process is repeated until the entire data stream has been exhausted. How then does a terminal operation like reduce or sum handle that? Such operations accumulate intermediate results internally as values arrive, and produce the final result in one go once the stream is exhausted, returning that value to the caller.

  • Debugging streams:
    In most cases, each individual operation in a long chain of stream operations is small enough that we can easily weed out bugs – logical or otherwise. When the body of each operation starts growing, though, debugging becomes much more difficult. The Java compiler helps us with static type issues, but it cannot be relied upon to pinpoint logical errors in large bodies of code. The sane thing to do is to keep each operation limited to a single conceptual abstraction, and to ensure that that abstraction can be expressed in a line or two of code at most. (A small sketch using the “peek” operation to trace values through a pipeline follows this list.)

  • Using parallel streams:
    As could be seen in the last example, we can use parallel streams in those cases where the operations are essentially independent of one another. The thing is that the onus for determining and ensuring this is on the developer. Java will not (and cannot) check whether the operations are independent of one another or not. This means that unless well thought through, this can lead to a lot of head-scratching and confusion.

    Another potential problem with parallel streams is that debugging (which is already hard enough with parallel code) becomes even tougher when things don’t go as planned. This goes right back to making sure that the whole sequence of operations is well thought through. Nothing can substitute for good planning.

    Finally, the performance boost might not be very noticeable in many cases, especially for small data sets. This becomes even truer on single-core machines (which are, to be fair, becoming rarer by the day). In any case, one should carefully weigh the overheads associated with parallel streams against the purported performance benefits from using them.
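Coming back to the debugging point above: the Stream interface provides a “peek” operation precisely for observing values as they flow through a pipeline, without altering them. Here is a small sketch (the comments describe the behaviour one should expect):

// Trace each element as it passes through the pipeline.
IntStream.rangeClosed(1, 4)
         .peek(n -> System.out.println("before filter: " + n))
         .filter(n -> n % 2 == 0)
         .peek(n -> System.out.println("after filter:  " + n))
         .sum();

// Note: remove the terminal sum() and *nothing* is printed. Intermediate
// operations (peek included) are lazy -- they run only when a terminal
// operation pulls values through the pipeline.
IntStream.rangeClosed(1, 4)
         .peek(n -> System.out.println("never printed"));

This also neatly demonstrates the non-terminal/terminal distinction from earlier: the pipeline does no work at all until a terminal operation demands values.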

Conclusion

Streams are, without doubt, my favourite feature in Java 8. As the world moves more and more towards Functional Programming as a core paradigm (and one which is orthogonal to Object Orientation, I must add), it is becoming increasingly important for developers to know at least the fundamentals of Functional Programming – using pure functions wherever possible, avoiding side-effects (especially the modification of data structures), using higher-order functions such as filter, map, and reduce, and, of course, moving towards more declarative rather than imperative code.

Functional Programming is a very old concept that is being rediscovered as the world moves onto a massively multi-core environment where we just cannot afford the headaches associated with mutable state and side-effects. Of course, side-effects are essential for getting anything practical done (imagine a world without any IO!), and some languages handle that problem quite elegantly (Haskell) while others focus rather on providing efficient immutable data structures (Clojure), but one guideline that is bound to be useful whatever your language’s support for Functional Programming might be (or not!) is to clearly separate out the functional and non-functional parts of the code and provide a clear and simple form of interaction between them. We will discuss Functional Programming and its core concepts further in the next few posts.

A highly opinionated review of Java Lambdas

What really is a lambda expression?

A lambda expression is, for all intents and purposes, an anonymous function. That really is all there is to it. In languages that support first-class functions, this is just another feature of the language – functions are on par with other types in the language. However, in languages that don’t consider functions first-class, it becomes a bit of an esoteric concept.

The origin of the concept is in the Lambda Calculus first propounded by the great Alonzo Church. According to that scheme, functions are basically entities which take some (or no) parameters, and have a body of code that can use those parameters. There are essentially no side-effects in such functions. That means that a function is deterministic – given the same set of parameters, it will always produce the same output. This is, in fact, the very foundation of Functional Programming. In modern times, Functional Programming is often conflated with strongly and statically typed languages. This is clearly wrong – the original Lambda Calculus had no notion of types at all! (There is a variant of it, though: the typed Lambda Calculus.) Most of the languages that support lambda expressions today, however, freely allow plenty of side-effects within lambda expressions. The main takeaway here, though, is that lambda expressions are conceptually what named functions are made out of.
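To make the determinism point concrete, here is a quick sketch (using Java 8 syntax, which we will get to shortly) contrasting a pure lambda with a side-effecting one; the names are purely illustrative:

import java.util.function.Function;

int[] counter = {0};  // a mutable cell visible to the lambda below

Function<Integer, Integer> square = n -> n * n;             // pure: same input, same output
Function<Integer, Integer> noisy  = n -> n + counter[0]++;  // impure: depends on hidden, mutating state

square.apply(3);   // always 9, no matter how often it is called
noisy.apply(3);    // 3 on the first call, 4 on the second, 5 on the third ...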

Lambdas in Java 8 and how they came to be

The biggest features in Java 8 were lambda support and the Stream API. In many ways, lambdas are important mainly with respect to their heavy use in the Stream API (as seen in the previous blog on JShell). The key concept to understand when learning lambdas in Java is that lambdas/functions are not first-class objects in Java. Their entire existence is strictly bound to and controlled by interfaces, specifically SAM (Single Abstract Method) interfaces – interfaces which contain only a single abstract method. In my opinion, this severe crippling of lambdas in Java has created more problems than it has solved. New programmers who pick up Java and run with it are liable to be very confused when they move on to languages which do support lambdas in a more natural and proper manner. In any case, let’s work with what we’ve got.

So why a Functional Interface? Prior to Java 8, if we wanted to simulate cases where we wanted to pass some functionality into another function, we had to make do with anonymous classes. For instance, to create a new thread, we could have done the following:

jshell> (new Thread (new Runnable () {
   ...>    @Override
   ...>    public void run () {
   ...>     System.out.println("Hello from thread!");
   ...>    }
   ...>  })).start()

jshell> Hello from thread!

We observe that the sole purpose of the anonymous class is to perform some action, but the cognitive dissonance comes into play when we see that the Thread class constructor expects an instance (basically a data object) of type Runnable. This is exactly the same pattern that was followed by C++ until C++11. In fact, this is what is known (rather pompously, I must add) as a functor.

Here is what the Runnable interface looks like:

public interface Runnable {
   void run();
}

This pattern – a (dummy) interface containing a single method which basically does all the work that an anonymous or named function should have done in the first place – was found to be in such widespread use amongst Java developers that the committee which worked on developing lambda support in Java decided to make it kosher and provide additional support from the Java runtime. As a result, from Java 8 onwards, wherever a SAM type is expected, a lambda expression can be used in its stead. They have been made essentially interchangeable.
For example, the previous example can now be written more succinctly as:

jshell> (new Thread(() -> System.out.println("Hello from thread...again!"))).start()
Hello from thread...again!

An optional annotation, @FunctionalInterface, has also been introduced; it makes the compiler verify that the annotated interface really does have exactly one abstract method. In fact, in order to help out developers, a bunch of Functional Interfaces now come bundled with the JDK in the java.util.function package. I would highly recommend exploring them and testing them out to get a feel for them – a small sketch of some of the most commonly used ones follows.
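For instance (a quick sketch; all of these are real interfaces from java.util.function, and note how each one names its single abstract method differently):

import java.util.function.*;

Predicate<String> isLong      = s -> s.length() > 3;    // boolean test(T t)
Function<String, Integer> len = String::length;         // R apply(T t)
Supplier<String> greeting     = () -> "Hello!";         // T get()
Consumer<String> printer      = System.out::println;    // void accept(T t)
UnaryOperator<String> shout   = String::toUpperCase;    // T apply(T t)

isLong.test("Gennady");                        // true
len.apply("Petr");                             // 4
printer.accept(shout.apply(greeting.get()));   // prints HELLO!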

Custom Functional Interfaces

We can define our own functional interfaces in Java 8 (and above). The only restriction, as mentioned before, is that the interface have a single abstract method.

For instance, the standard package (java.util.function) comes with functional interfaces that support single parameter (Function) and double parameter (BiFunction) functions. Let us define a triple parameter function just for this example.

jshell> @FunctionalInterface interface TriFunction<T, U, V, R> {
   ...>     R apply(T t, U u, V v);
   ...> }
|  created interface TriFunction

jshell> int x = 100
x ==> 100

jshell> String y = "Hello"
y ==> "Hello"

jshell> double z = Math.PI
z ==> 3.141592653589793

jshell> TriFunction<Integer, String, Double, String> f = 
   ...>     (i, s, d) -> i + s + d;
f ==> $Lambda$6/1318822808@6d7b4f4c

jshell> System.out.println(f.apply(x, y, z))
100Hello3.141592653589793

Features and Limitations of Java Lambdas

So how exactly does a SAM map onto a lambda expression? To understand this better, first we need to get the syntax and semantics of lambda expressions out of the way:

Java’s lambda syntax was clearly influenced by Scala. A basic lambda expression has the following form:

(<param*>) -> [{] <body-form+> [}]

where,

“param” is a comma-separated list of zero or more parameters, with optional types (note that in some cases where Java’s type inference mechanism is unable to infer the type, you will need to specify the type(s) explicitly). The braces are optional in the case of a single-expression body, but are required when the body spans more than one statement. Finally, the body is a series of normal Java statements: in the case of multiple statements, each statement is terminated by a semi-colon, and a return statement is also required (if the return type is not void).
So a lambda expression that takes a String and returns a String might take on several forms in actual code:

(s) -> s.toUpperCase()

The type declaration is not required in this case, and a return statement is not allowed. This would be the recommended form for a typical lambda expression – don’t declare the types and don’t use any return statement. Of course, this only works for a single-statement (or, more correctly, a single-expression) body.

In case we want to use braces, we need to have the whole expression take the following form:

(String s) -> { return s.toUpperCase(); }

Here we also declare the type of the parameter explicitly (which is optional, but often clearer) and include an explicit return statement, which a braced body does require when the return type is not void. In all cases where the body contains multiple statements, this would be the recommended format for a lambda expression.
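For instance, a multi-statement version of the earlier lambda might look like this (contrived, purely to show the form):

(String s) -> {
    String trimmed = s.trim();       // multiple statements, so braces are required...
    return trimmed.toUpperCase();    // ...as is an explicit return
}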

Now getting back to how a SAM is mapped onto a lambda expression, whenever the Java Runtime encounters a lambda expression, it can do either of two things depending on the context in which the SAM is used:

  • In case the lambda expression is used along with a Stream API function (such as map, filter, reduce, etc.), the Java runtime already has enough context about the form of the function that is expected – the parameter types and the return type. For instance, if we are trying to double all the even natural numbers up to 10, we might do:
    jshell> IntStream
             .rangeClosed(1, 10)
             .filter((n) -> n%2 == 0)
             .map((d) -> d*2).forEach(System.out::println)
    4
    8
    12
    16
    20
    

    In this case, the compiler knows that the filter method takes a parameter of the form Predicate<T> (for IntStream, the primitive specialisation IntPredicate, whose single method is boolean test(int value)). So what happens is that the provided lambda expression is checked against this signature, and if it matches, the “test” method is invoked implicitly when the stream runs. Similarly for the map function as well.

  • The second case arises in the case where we make use of Functional Interfaces explicitly and then use them as the “target” of a lambda expression. For instance, suppose we want to write a function that takes a String and an Integer and returns their concatenated form as a String, we might have something like:
    jshell> BiFunction<String, Integer, String> f = 
               (s, i) -> s + String.valueOf(i)
    f ==> $Lambda$17/103887628@42f93a98
    
    jshell> f.apply("Hello", 99)
    $21 ==> "Hello99"
    

    In this case as well, the compiler will ensure that the lambda expression matches the type of the declared function variable. Pretty straightforward.

So far so good, but there is a huge problem in the second case above. The problem is that even once the function object has been created, the name of the SAM must be known before we can use it. This is because Java does not have operator overloading (unlike C++), so a lambda cannot simply be invoked with call syntax. This is why, in the current framework, we must know the exact name of the single method of each functional interface that we use. The “apply” method used above is the name of the SAM in the BiFunction functional interface. The problem is compounded because each functional interface (even in the standard package) defines its own name. Of course, this is not an insurmountable problem, but the same problem did not exist in C++ even before C++11. For instance, the previous example could have been written in C++ (using a functor) like so:

// pre C++-11
#include <iostream>
#include <sstream>

template< typename T, typename Func>
std::string concatenate(std::string s, T t, Func f)
{
    return f(s, t);
}

class string_int_string
{
    public:
        std::string operator()(std::string s, int i)
        {
            std::ostringstream oss;
            oss << s << i;
            return oss.str();
        }
};

int main()
{
    std::cout << concatenate("Hello", 99, string_int_string()) << std::endl;
   
    return 0;
}

A bit brittle, but it works. The generic function “concatenate” is important to note here, since it can take basically any functor (or, from C++11 onwards, lambda expression) and invoke the function object with the supplied arguments. The same approach is used in the C++ STL generic functions. Now if we look at how the code might look with C++11, we get:

// C++-11 and above
#include <iostream>
#include <sstream>

template< typename T, typename Func>
std::string concatenate(std::string s, T t, Func f)
{
    return f(s, t);
}

int main()
{
   std::cout << concatenate("Hello", 99,
                [](std::string s, int i) {
                    std::ostringstream oss;
                    oss << s << i;
                    return oss.str();
                }) << std::endl;
    
    return 0;
}

As can be seen, the approach is much cleaner. The difference between the functor version and the lambda-based one is that we have got rid of the class representing the functor object and moved its logic inside the lambda expression’s body. So the lambda expression is essentially an object that can bind the parameters just as in the case of a regular functor.

As can be seen, even in C++11 we can write generic functions, and all we need to do is invoke the lambda like a function. No messy SAMs there! I personally feel that C++’s lambda support is far superior to that of Java, especially since C++ supports closures. More on that in the next section.

Another disadvantage of Java’s lambda support is that the following is impossible in Java:

#include <iostream>

int main()
{
    int x = 100, y = 100;
    
    std::cout << ([x, y]() { return x + y; })() << std::endl;
    
    return 0;
}

The code above simply uses a lambda expression to capture variables defined in the outer lexical scope (more on that in the next section), but the interesting bit is that the lambda expression can be invoked like a proper function object even without assigning it to a variable.

If we tried the same in Java, we’d get an error:

jshell> int x = 1100
x ==> 1100

jshell> int y = 200
y ==> 200

jshell> (() -> x + y)
|  Error:
|  incompatible types: java.lang.Object is not a functional interface
|  (() -> x + y)
|   ^---------^
|  Error:
|  incompatible types: <none> cannot be converted to java.lang.Object
|  (() -> x + y)
|  ^-----------^

As can be seen from the error message, the compiler complains that “Object” is not a functional interface. Even if we assumed that the compiler would be able to discern the functional interface type from the lambda’s signature and produce a result, we still get an error:

jshell> ((int a, int b) -> { return a + b; })).apply(x, y)
|  Error:
|  ';' expected
|  ((int a, int b) -> { return a + b; })).apply(x, y)
|                                       ^
|  Error:
|  incompatible types: java.lang.Object is not a functional interface
|  ((int a, int b) -> { return a + b; })).apply(x, y)
|   ^---------------------------------^
|  Error:
|  incompatible types: <none> cannot be converted to java.lang.Object
|  ((int a, int b) -> { return a + b; })).apply(x, y)
|  ^-----------------------------------^
|  Error:
|  cannot find symbol
|    symbol:   method apply(int,int)
|  ((int a, int b) -> { return a + b; })).apply(x, y)
|                                         ^---^
|  Error:
|  missing return statement
|  ((int a, int b) -> { return a + b; })).apply(x, y)
|  ^------------------------------------------------^

So no go there. A point down for Java lambdas! More seriously, I find this to be an extremely irritating reminder that Java’s lambdas are not really lambdas. They are more like syntactic sugar for the good old anonymous classes. In fact, there are more serious implications precisely for this reason.

Closures

This is again one of those concepts that are notoriously badly explained. A lot of newcomers to programming are often scared to death and put off from learning more about Functional Programming due to unnecessary FUD on the part of many “experts” in the field. So, let’s try and explain this as clearly as possible:

In Set Theory, a set is defined to be “closed” under an operation if applying the operation to members of the set produces a result that belongs to the same set. For instance, if the set under consideration is the set of Natural Numbers (N) and the operation is + (addition), we can say Natural numbers are closed under addition. Why? The reason is quite simple and follows straight from the definition – adding any two natural numbers (or indeed any number of numbers, but we’re considering the strict binary operation here) always produces a Natural number. On the other hand, N is not closed under – (subtraction). This is because subtracting some Natural number from another Natural number might produce 0 or some negative number, which is clearly not a member of N. So much for mathematics.

In Psychology, “closure” refers to the strict need of an individual to find a definitive answer to a problem.

In business, “closure” refers to the process by which a business closes down.

You see what I’m getting at? The term “closure” is highly overloaded, and even within mathematics, the term has different meanings in different branches. So my point is this – simply forget about the name and focus on the concept.

In Computer Science, a closure is intricately tied to the concept of scoping, specifically lexical scoping. This is why closures are often referred to as “lexical closures”. In order to understand closures properly, we must clearly understand what lexical scoping entails.

Lexical scoping is intimately tied to the rules defining the lifetimes (and visibility) of variables. Under dynamic scoping, a reference to a variable is resolved against the most recent binding of that name on the call stack at run time, which effectively gives a variable visibility far beyond the block that defined it. Pure lexical scoping, on the other hand, ensures that the visibility of a variable is limited to the lexical block in which it is defined (say a function or a local block), and to nested blocks. Lexically scoped variables are not visible to outer blocks, and variables defined in inner blocks will effectively “shadow” those defined in the outer scope. If no new variable with the same name is defined in an inner block, references to that name will always refer to the variable in the outer scope. This behaviour forms the basis of what is known as “variable capture”.

A variable is said to be captured by a lambda function if the lambda refers to an outer-scope variable at the time of its creation. The lambda function is said to “close over” those variables, and this is the reason why this feature is called a “closure”. So what does this variable capture actually imply in the grand scheme of things? It implies this – when a lambda function captures a variable in its outer scope, the lifetime of that variable is effectively extended. Under normal circumstances, local variables die when their function exits. Here, however, since the lambda function has captured the variable, the variable will not die even when the function in which it was defined has returned!
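Java does, in fact, support the lifetime-extension half of this, as long as the captured variable is never mutated (the restriction we will run into below). A small sketch (the method name is illustrative):

import java.util.function.Supplier;

Supplier<Integer> captureDemo() {
    int x = 42;          // a local variable of this method
    return () -> x;      // the lambda closes over x (x is "effectively final")
}

Supplier<Integer> s = captureDemo();
s.get();   // 42 -- x is still reachable even though captureDemo() has returned

The trouble begins as soon as we want to mutate what we capture.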

In this respect, Java behaves absolutely horribly. Java has extremely weird scoping rules. In some ways, it does use lexical scoping. In most respects, not:

jshell> void demoScopingInJava() {
   ...>          int x = 100;
   ...>          
   ...>          System.out.format("Function scope x = %d\n", x);
   ...>         {
   ...>             System.out.format("Function scope x (before shadowing) = %d\n", x);
   ...>             /// int x = 999 is not allowed!
   ...>             x = 999;
   ...>             System.out.format("Function scope x (after shadowing) = %d\n", x);
   ...>         }
   ...>          System.out.format("Function scope (again) x = %d\n", x);
   ...>      }
|  created method demoScopingInJava()

jshell> demoScopingInJava()
Function scope x = 100
Function scope x (before shadowing) = 100
Function scope x (after shadowing) = 999
Function scope (again) x = 999

Java does not allow shadowing here at all — we cannot declare a new variable named ‘x’ inside the block. Instead, all references to the variable ‘x’ are to the function-scope variable. In this case, we are able to mutate ‘x’ from 100 to 999, but that is because the inner block is within the outer function block, so the Java runtime can ensure that the variable cannot outlive the function. However, mutation is not allowed when we are in a situation where the variable could be referenced even after the function in which it was declared exits.

For instance, if we want to implement a function that prints line numbers in an increasing order every time it is called, we might try to do something like this in Java:

jshell> Function<Void, Void> lineNumberGenerator() {
   ...>      int lineNumber = 0;
   ...>      return (n) -> 
   ...>          { lineNumber++; 
   ...>            System.out.format("Line number: %d\n", lineNumber); 
   ...>            return null; };
   ...> }
|  Error:
|  local variables referenced from a lambda expression must be final or effectively final
|       return (n) -> { lineNumber++; System.out.format("Line number: %d\n", lineNumber); return null; };
|                       ^--------^
|  Error:
|  local variables referenced from a lambda expression must be final or effectively final
|       return (n) -> { lineNumber++; System.out.format("Line number: %d\n", lineNumber); return null; };
|                                                                            ^--------^

We can see, then, that modifying a variable defined in the outer scope is not allowed in the case where the code escapes the local scope. As the error messages make clear, the variable lineNumber must be final or effectively final for the code to even compile (and of course, it would then fail again unless we removed the mutating statement inside the lambda).
This is the reason why we cannot implement proper closures in Java – Java’s bizarre downward-percolating forced visibility of variables.
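For completeness: the workaround most Java developers reach for is to capture a mutable container rather than the variable itself – the reference stays effectively final while its contents change. Here is a sketch of the line number generator using java.util.concurrent.atomic.AtomicInteger (a Runnable is used instead of Function<Void, Void>, since the lambda takes no argument and returns nothing):

import java.util.concurrent.atomic.AtomicInteger;

Runnable lineNumberGenerator() {
    AtomicInteger lineNumber = new AtomicInteger(0);   // the reference itself is effectively final
    return () -> System.out.format("Line number: %d\n",
                                   lineNumber.incrementAndGet());
}

Runnable printLineNumbers = lineNumberGenerator();
for (int i = 0; i < 5; i++) {
    printLineNumbers.run();   // Line number: 1 ... Line number: 5
}

This works, but it is exactly the kind of ceremony that languages with real closures (as the C++ and Common Lisp examples below show) make unnecessary.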

And, oh, just in case you thought this applies only to lambda blocks, it’s always been the case:

jshell> void scopingRulesTest () {
   ...>    int x = 100;
   ...>    
   ...>    (new Thread(new Runnable () {
   ...>        @Override
   ...>        public void run() {
   ...>           x++;
   ...>          System.out.println(x);
   ...>        }
   ...>      })).start();
   ...> }
|  Error:
|  local variables referenced from an inner class must be final or effectively final
|            x++;
|            ^

The same example in C++ works as expected (including modification of the outer scope’s variable):

#include <iostream>

using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    
    int x = 100;
    cout << "Function scope x = " << x << endl;
    {
        x = 101;
        cout << "Block scope x = " << x << endl;
        int x = 999;
        cout << "Block scope x = " << x << endl;
    }
    
    cout << "Function scope x = " << x << endl;
    
    return 0;
}

sh-4.3$ main                                                                                                                                                              
Function scope x = 100                                                                                                                                                    
Block scope x = 101                                                                                                                                                       
Block scope x = 999                                                                                                                                                       
Function scope x = 101   

And to complete the closure demo, here is the line number example:

// C++-11 (and above)
#include <iostream>
#include <functional>

using namespace std;

function<void()> line_number_generator()
{
    int line_num = 0;
    
    return [line_num]() mutable
            {
                line_num++;
                cout << "Line number: " << line_num << endl; 
            };
}

int main()
{
    ios_base::sync_with_stdio(false);
    
    function<void()> print_line_numbers = line_number_generator();
    
    for (int i = 0; i < 5; i++) {
        print_line_numbers();    
    }
}

sh-4.3$ g++ -std=c++11 -o main *.cpp                                                                                                                                      
sh-4.3$ main                                                                                                                                                              
Line number: 1                                                                                                                                                            
Line number: 2                                                                                                                                                            
Line number: 3                                                                                                                                                            
Line number: 4                                                                                                                                                            
Line number: 5  

Note that by-value variable capture is read-only in C++ by default; the “mutable” keyword can be used to change that behaviour (and capture by reference allows mutation of the original variable). In this respect, C++11 supports closures in a way that Java simply does not!

The Common Lisp version is pretty much identical in behaviour to the C++ one. In the case of Common Lisp, however, any reference to an outer-scope variable always captures the mentioned variable (unless a local variable with the same name is already defined). This can be seen in the Common Lisp version of the same example:

CL-USER> (defun foo ()
           (let ((x 100))
             (format t "Function scope x = ~d~%" x)
             (progn
               (setf x 101)
               (let ((x 999))
                 (format t "Inner block x = ~d~%" x)))
             (format t "Function scope (again) x = ~d~%" x)))
STYLE-WARNING: redefining COMMON-LISP-USER::FOO in DEFUN
FOO
CL-USER> (foo)
Function scope x = 100
Inner block x = 999
Function scope (again) x = 101
NIL

This effectively ensures that a nested function that refers to outer-scope variable(s), and which is then returned from the enclosing function, is always a closure, as can be seen from the following example (the same line number example as the C++ one):

CL-USER> (defun line-number-generator ()
                      (let ((line-number 0))
                        #'(lambda ()
                             (incf line-number)
                            (format t "Line number: ~d~%" line-number))))
LINE-NUMBER-GENERATOR

CL-USER> (defvar print-line-numbers (line-number-generator))
PRINT-LINE-NUMBERS

CL-USER> print-line-numbers
#<CLOSURE (LAMBDA () :IN LINE-NUMBER-GENERATOR) {100439133B}>

CL-USER> (dotimes (i 5)
           (funcall print-line-numbers))
Line number: 1
Line number: 2
Line number: 3
Line number: 4
Line number: 5
NIL

Conclusion

Well, that about wraps up this rather long blog post! As can be seen from this post (as well as the JShell post, and more posts to come, especially on Java Streams), lambda support in Java is an extremely welcome and necessary feature. However, in many ways it is a very crippled version of the lambda support found in other languages, especially with regard to how closures are not really supported in Java. Thankfully, most code that uses lambda expressions will be code that uses the Streams API, and as such most (if not all) of the warts of Java’s lambdas will be effectively hidden away within map, filter, reduce, or some other mechanism in the Streams API.

Note: All the Java code in this blog post was executed in JShell. For use in a regular Java environment, be sure to add semi-colons (and enclosing classes) wherever appropriate.