An Object Explorer Written in a Map/Filter style with C# Partial Application Functions

Recently I wanted to visually inspect the structure of an object created by some service, so I wrote a function that reflects over an object and builds a tree of the various class properties, drilling into List<T> collections.  Simple enough, but I thought, gee, this might be useful; I'd like to expand it to handle Dictionary<A, B>, enums, nullable property types, etc.  The main recursive function ended up being a 130-line, sort-of-organized mess.  Thinking about refactoring it into more manageable pieces, it occurred to me that all this code was basically an imperative implementation of map and filter operations.  So I decided to go down the rabbit hole of exploring what an implementation would look like with an emphasis on writing it in a map/filter style.  The rabbit hole got deeper when I realized that partial application of functions would come in real handy as well.  The code presented here is the result, with thanks to two bloggers:

Throughout this article, I use the term “function” instead of “method” (the latter being more typical in C#) to convey the concept that we’re doing functional programming in C#.

Things I Learned

Writing in a Functional Style Creates Very Abstract Code

Writing in a functional style with partial application and functions that take functions as parameters results in code that leaves you with a "what does this actually do?" feeling because everything is so abstracted out.  This probably explains why programs written in functional programming languages are considerably more difficult to grok.

Thinking Abstractly Is Hard

The other thing I discovered was how hard it is to think at this level of abstraction.  In order to achieve a high degree of abstraction, one has to carefully consider the parameters of functions, especially when using functions in conjunction with partial application.  Imperative programming is usually much more concrete — it does what the statements say it does — and common practices like writing for loops, which could be abstracted out to yielding iterators, for example, are never written that way unless you’re specifically implementing an IEnumerable<T> collection.  Imperative functions are usually written without consideration of parameter order — I need to give the function x, y and z, and I expect r back.  This (which might be a good thing) restricts the level of abstraction one can achieve with partial application and currying.  For example, taking a repeatable pattern of “iterate and operate” and converting it into a general-purpose function like

void Map<T>(Func<PropertyNode, IEnumerable<T>> iterator, Action<PropertyNode, T> action, PropertyNode pn) is hardly ever done.
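To make that concrete, here is a sketch of what such a general-purpose function could look like, with a minimal stand-in PropertyNode (the article's real class appears later) so the signature reads the same:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the article's PropertyNode, for illustration only.
public class PropertyNode
{
    public List<string> Items { get; } = new List<string>();
}

public static class IterateAndOperate
{
    // The repeatable "iterate and operate" pattern as one general-purpose function:
    // the iterator extracts items from the node, the action operates on each one.
    public static void Map<T>(Func<PropertyNode, IEnumerable<T>> iterator,
                              Action<PropertyNode, T> action,
                              PropertyNode pn)
    {
        foreach (T item in iterator(pn))
        {
            action(pn, item);
        }
    }
}
```

A call site like IterateAndOperate.Map(p => p.Items, (p, item) => Console.WriteLine(item), pn) lets type inference work out T from the iterator's return type.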

Type Inference and Generics

This really exercises C# generics and type inference.  The advantage of functional programming languages is that their type inference is considerably better.  For example, in functional programming, I could probably write something like this:

let mapper = (a, b, c) => Map(a, b, c);

and the type inference engine would figure out what a, b and c are based on their usage, for example:

action = mapper.PartialApply(DictionaryIterator, CreateCollectionChild);

In C#, you can’t write:

var mapper = (a, b, c) => Map(a, b, c);

You have to explicitly define the type, resulting in (prepare yourself):

Action<Func<PropertyNode, IEnumerable<KeyValue>>, Action<PropertyNode, KeyValue>, PropertyNode> mapper = (a, b, c) => Map(a, b, c);

This also probably explains why C# code is written in a more imperative style!

Partial Application and Currying

When used correctly, partial application and currying are the cat’s meow.  However, these are two very different concepts, which I’ll describe later.  The problem is that C# doesn’t support partial application and currying as part of the language syntax.  There are a few options:

Brain Numbing Extension Methods


public static Action<B, C> PartialApply<A, B, C>(this Action<A, B, C> action, A arg1) => (b, c) => action(arg1, b, c);

and a use case like:

action = mapper.PartialApply(DictionaryIterator, CreateCollectionChild);
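The article shows the overload that fixes the first argument of a three-argument Action; a sketch of the wider family (the names and overloads here are chosen for illustration) would add siblings for fixing two arguments, and for Func:

```csharp
using System;

public static class PartialApplyExtensions
{
    // Fix the first argument of a three-argument Action.
    public static Action<B, C> PartialApply<A, B, C>(this Action<A, B, C> action, A a)
        => (b, c) => action(a, b, c);

    // Fix the first two arguments.
    public static Action<C> PartialApply<A, B, C>(this Action<A, B, C> action, A a, B b)
        => c => action(a, b, c);

    // The same idea for a Func that returns a value.
    public static Func<B, C, R> PartialApply<A, B, C, R>(this Func<A, B, C, R> f, A a)
        => (b, c) => f(a, b, c);
}
```

With these, given Func<int, int, int, int> add = (a, b, c) => a + b + c;, the call add.PartialApply(3)(4, 5) yields 12.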

Using Lambda Expressions, Part I

If you don’t like the extension method, you could write this:

action = a => mapper(DictionaryIterator, CreateCollectionChild, a);

Using Lambda Expressions, Part II

Or even better (getting rid of the mapper variable as well):

action = a => Map(DictionaryIterator, CreateCollectionChild, a);

So you see, there’s several ways to skin this cat, the last one being the most concise and readable, as long as you’re familiar with lambda expressions.

Notice now that C#’s type inference works beautifully!

Now Obviously…

C# already supports mapping and filtering with the Select and Where methods in LINQ (and fold, which uses a running accumulator, with Aggregate.)  This means that my Map and Filter functions are actually quite unnecessary — they are used here mainly as a teaching exercise to formally state that an operation is a mapping operation or a filter operation.

Also, one thing I keep forgetting is the ability to invoke a function based on the implicit parameters of the lambda expression.  So, for example:

return Filter(objectProperties, prop => GetMethodCountIsZero(prop));

is simplified as:

return Filter(objectProperties, GetMethodCountIsZero);

which also is more in line with functional programming style.

Three Things I Like About the End Result

The thing that stands out the most is that the resulting code consists of functions that, if not one-liners, are typically five lines or less!

The second thing I like about this approach is that it completely eliminated if-then-else code, leveraging C# 7’s case ... when syntax.  Pattern matching, particularly when used in such a way that every case returns rather than breaks, enforces the implementation of the default, or final else, case, resulting in more robust code, in my opinion.

The third (but not least) thing I like about this approach, especially when using pattern matching as described above in conjunction with mapping and filtering, is that your code, being more robust, is less likely to throw exceptions.  This is a huge benefit of a more functional programming style!  If your code does throw an exception, it’s because you haven’t handled a case correctly (well, that’s obviously true with imperative code as well); it’s just that functional programming makes the handling of all cases more explicitly conscious.  You have to work with it for a while to believe that statement.

Partial Application vs. Currying

Most of the examples you’ll see on the web regarding partial application and currying (even in F#) talk about simple functions that take two parameters.  This is sort of pointless because when you partially apply one parameter or curry one parameter, the call to resolve the operation is identical, so you really don’t get an appreciation of the syntactical difference.  I, however, will avoid that by demonstrating the differences between partial application and currying using three (wow!) parameters.

Partial Application

Very simply, partial application lets you assign the first n parameters, returning a function that takes the rest.  Given a function that returns the sum of three values:

Func<int, int, int, int> add = (a, b, c) => a + b + c;

You can partially apply one or two parameters:

Func<int, int, int> needsTwoMoreParams = (b, c) => add(3, b, c);
Func<int, int> needsOneMoreParam = (c) => add(3, 4, c);

Using these partial application functions looks like this:

int twelve = needsTwoMoreParams(4, 5);
twelve = needsOneMoreParam(5);

Notice we have to explicitly define the delegate type of the partial application function.  We cannot use var!

var oneParam = (b, c) => add(3, b, c);

Cannot assign lambda expression to an implicitly-typed variable.

In a functional programming language like F#, the type is inferred:

> let add a b c = a + b + c;;
val add : a:int -> b:int -> c:int -> int

> let add3 = add 3;;
val add3 : (int -> int -> int)

> let add3and4 = add 3 4;;
val add3and4 : (int -> int)

The key thing to note with partial application in C# is the construct (b, c) for the remaining parameters in the needsTwoMoreParams function.  This is what tells you it’s a partial application rather than currying!

You can do the same thing with functions that are not C# Func types but Action types:

Action<int, int, int> xadd = (a, b, c) => Console.WriteLine(a + b + c);
Action<int, int> xoneParam = (b, c) => xadd(3, b, c);
Action<int> xtwoParams = (c) => xadd(3, 4, c);

Currying

As Jon Skeet so eloquently wrote: “currying effectively decomposes the function into functions taking a single parameter.”  Study this example carefully:

Func<int, Func<int, Func<int, int>>> curried = a => b => c => a + b + c;
int twelve = curried(3)(4)(5);
Func<int, Func<int, int>> oneParam = curried(3);
Func<int, int> twoParams = curried(3)(4);
twelve = twoParams(5);

Note how the parameters of the curried function are called as separate parameters: (3)(4)(5).  Also note that the function is defined with a => b => c =>, which tells you this is a curried function!

Interestingly, we can use var in this case:

var oneParam = curried(3);
var twoParams = curried(3)(4);

Again however, the syntax in C# for defining the curried function must be explicit – “a function that takes an int that returns a function that takes an int and returns a function that takes an int and returns an int.”  This gets ugly real quick, and again, in F#, the inference engine knows you’re doing currying:

> add(3);;
val it : (int -> int -> int) = <fun:it@9>

> let add3 = add(3);;
val add3 : (int -> int -> int)

> let add3and4 = add(3)(4);;
val add3and4 : (int -> int)

> add3and4(5);;
val it : int = 12

Also notice this in F#:

add3(4, 5);;
add3(4, 5);;
-----^^^^

stdin(13,6): error FS0001: This expression was expected to have type
'int' 
but here has type
''a * 'b' 

The correct syntax in F# looks just like C# – one parameter per “function”:

> add3(4)(5);;
val it : int = 12

Again, once you create a curried function, you have to treat it like a curried function, same as in C#.

What about curried functions of an Action?  It would look like this:

Func<int, Func<int, Action<int>>> curried = a => b => c => Console.WriteLine(a + b + c);
var oneParam = curried(3);
var twoParams = curried(3)(4);
twoParams(5); 

Note how the final result of the last Func is an Action that takes the last value.

So now hopefully you have a clear understanding of the difference between partial application and currying.
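The mechanical relationship between the two can also be captured in code.  Here is a sketch of a Curry extension method (my own name, not a built-in API) that turns an ordinary three-argument Func into its curried form, so the ugly nested Func type is written only once:

```csharp
using System;

public static class CurryExtensions
{
    // Decompose a three-argument Func into a chain of single-argument Funcs.
    public static Func<A, Func<B, Func<C, R>>> Curry<A, B, C, R>(this Func<A, B, C, R> f)
        => a => b => c => f(a, b, c);
}
```

Given Func<int, int, int, int> add = (a, b, c) => a + b + c;, the expression add.Curry()(3)(4)(5) evaluates to 12, and var works for the intermediate steps just as it did above.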

The Code

Let’s get started with the topic of this article, an “instance explorer.”  To recap, the basic idea is to take an object (an instance of something) and create a tree, recursively, of all its properties.

Extension Methods

First, some useful extension methods which should be self-explanatory (though why Prepend exists will be explained later.)

public static class ExtensionsMethods
{
    public static string Quote(this string src)
    {
        return "\"" + src + "\"";
    }

    public static string Prepend(this string src, string pre)
    {
        return pre + src;
    }

    public static void ForEach<T>(this IEnumerable<T> items, Action<T> action)
    {
        foreach (T item in items) action(item);
    }

    public static string ToCsv(this IEnumerable<object> collectionToConvert, string separator = ", ")
    {
        return String.Join(separator, collectionToConvert.Select(o => o.ToString()));
    }

    public static string FriendlyName(this Type type)
    {
        if (type.IsGenericType)
        {
            var namePrefix = type.Name.Split(new[] { '`' }, StringSplitOptions.RemoveEmptyEntries)[0];
            var genericParameters = type.GetGenericArguments().Select(FriendlyName).ToCsv();
            return namePrefix + "<" + genericParameters + ">";
        }

        return type.Name;
    }
}

FriendlyName is a nifty little function that recurses into generic parameters and turns them into the familiar syntax that you use in code.  Credit goes to “Phil” and the cool use of StringSplitOptions and Select(FriendlyName).

Tree Nodes and Containers for Nodes

Each node of the property tree is handled by:

public class Node
{
    public string Name { get; set; }
    public string Value { get; set; }
    public string PropertyName { get; set; }
    public Node Parent { get; set; }
    public List<Node> ChildNodes { get; protected set; }

    public string DisplayText { get { return Name + Value?.Prepend(" : "); } }

    public Node()
    {
        ChildNodes = new List<Node>();
    }

    public string TreeToString(StringBuilder sb = null, int indent = 0)
    {
        sb = sb ?? new StringBuilder();
        sb.Append(new String(' ', indent));
        sb.AppendLine(DisplayText);
        ChildNodes.ForEach(n => n.TreeToString(sb, indent + 2));

        return sb.ToString();
    }
}

A node handles everything it needs to know about itself, its children, and its parent.  It can also create a recursively indented string of all the properties of an object, which I basically just used to dump the output to the console, like this:

List<int> list = new List<int>() { 1, 2, 3 };
root = Parser.CreateInstanceTree(list);
Console.WriteLine(root.Node.TreeToString());
Console.WriteLine();

resulting in an indented tree of the list’s contents dumped to the console.

I also created a wrapper for each node that associates the actual object and object type:

public class PropertyNode
{
 public object Object { get; set; }
 public Type ObjectType { get; set; }
 public Node Node { get; set; }
}

I’m not sure this adds much value — I had originally intended to keep a clean separation between the object and the node in the tree by using a container class that associates the two.  It’s overkill for something as simple as this, and Node ended up much more specialized than I had intended anyway, so putting Object and ObjectType directly into Node would simplify a bad design, but it would still be a bad design, haha.  However, this is all just an aside to the main point of the article.

Creating the Tree

As the above example illustrated, creating the tree is a simple call, passing in the object that you want turned into a tree of properties, for example:

root = Parser.CreateInstanceTree(new List<int>() {1, 2, 3});

CreateInstanceTree simply does this:

public static PropertyNode CreateInstanceTree(object obj, PropertyNode parent = null, string propertyName = null)
{
    PropertyNode pn = ObjectToPropertyNodeTransform(obj, parent, propertyName);
    ObjectTypeToActionTransform(pn);

    return pn;
}

There’s a few things to note here:

  1. The idea that this is a recursive function is completely invisible just by glancing at this function.  You might glean the idea that something here is recursive, given that it creates a tree, but what?
  2. I’m explicitly naming the functions …Transform because I really want to hammer home the idea here that there’s only three things going on in this code:
    1. Simple transformations of an object of type A to an object of type B
    2. Mapping of a collection to either another collection (possibly of a different type) or to “actions.”
    3. Filtering a collection, potentially resulting in a smaller collection of the same type.
  3. As a result, some of this code is slightly more verbose, but the advantage to being a bit more verbose is that the code is much more semantically expressive–the function tells you what the code does, rather than having to read the code to figure out what the code does.

The third leg of the triad is “reduce” (aka “fold”) which, as mentioned before, most commonly behaves as an aggregator, using a running accumulator.
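For completeness, a minimal hand-rolled Reduce in the same explicit style as the article's Map and Filter might look like this (LINQ's Aggregate provides the same behavior):

```csharp
using System;
using System.Collections.Generic;

public static class Folds
{
    // Reduce (fold): combine the items of a collection into a single value
    // by threading a running accumulator through a combining function.
    public static R Reduce<T, R>(IEnumerable<T> items, R accumulator, Func<R, T, R> combine)
    {
        foreach (T item in items)
        {
            accumulator = combine(accumulator, item);
        }

        return accumulator;
    }
}
```

For example, Folds.Reduce(new[] { 1, 2, 3 }, 0, (acc, n) => acc + n) returns 6.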

ObjectToPropertyNodeTransform

This is a fancy name for instantiating a PropertyNode!

private static PropertyNode ObjectToPropertyNodeTransform(object obj, PropertyNode parent = null, string propertyName = null)
{
    Node node = CreateNode(parent?.Node);
    node.Name = PropertyNameOrObjectTypeName(propertyName, obj);
    parent?.Node.ChildNodes.Add(node);

    return new PropertyNode() { Node = node, Object = obj, ObjectType = obj?.GetType() };
}

The point here is that we often don’t think of object construction as transformation, but that’s exactly what it is–we take some values and build an object from those values.  Here, for example, we’re transforming a set of inputs into a container that associates a Node with the object.  If taken literally, this is a transform:

Node node = CreateNode(parent?.Node);

and this is a transform:

node.Name = PropertyNameOrObjectTypeName(propertyName, obj);

I believe we all too often fail to think of code as transformations, which has the effect of reducing the readability of the code (its semantic value, if you will) as well as affecting reusability and testing.

ObjectTypeToActionTransform

This function is actually the meat and potatoes of the whole process, as it determines what action to take based on the object type.  Here we’re using C# 7’s pattern matching syntax.  This is the only function in the whole code base that is longer than 5 lines of code:

private static void ObjectTypeToActionTransform(PropertyNode pn)
{
    switch (pn.ObjectType)
    {
        case null:
            SetNullValue(pn);
            return;

        case var objType when IsDictionary(objType):
            ActionMap(DictionaryIterator(pn), c => CreateCollectionChild(pn, c));
            return;

        case var objType when IsList(objType):
            ActionMap(CollectionIterator(pn), c => CreateCollectionChild(pn, c));
            return;

        case var objType when IsPrimitive(objType):
            SetValue(pn);
            return;

        case var objType when IsString(objType):
            SetQuotedValue(pn);
            return;

        case var objType when IsEnum(objType):
            SetValue(pn);
            return;

        default:
            ActionMap(PropertyIterator(pn), c => CreatePropertyChild(pn, c));
            return;
    }
}

As an aside, while I usually prefer a single point of exit from a function, having each case explicitly return has the benefit of enforcing a default handler, which in my opinion improves the code quality by requiring you to deal with (and think about) all the cases, including the default case, that need to be handled.

The pattern matching capability in C# 7.0 makes for much more concise code and in fact, I think it eliminates the need for “if-then-else” statements entirely.  Previously, something like this was not possible:

case typeof(IDictionary):

A constant value is expected

instead, tests with non-constant values would have to be done with if statements.

Instead, we can now test cases with expressions, which are handled by the following implementations:

private static bool IsPrimitive(Type t)
{
    return t.IsPrimitive;
}

public static bool IsEnum(Type t)
{
    return t.IsEnum;
}

private static bool IsString(Type t)
{
    return t == typeof(string);
}

private static bool IsDictionary(Type t)
{
    return t.GetInterfaces().Any(i => i == typeof(IDictionary));
}

private static bool IsList(Type t)
{
    return t.GetInterfaces().Any(i => i == typeof(IList));
}

Obviously these could have been coded directly into the case statements, for example:

case var objType when objType.GetInterfaces().Any(i => i == typeof(IDictionary)):

At the risk of repeating myself, I like the improved semantics of “the function tells you what it does, not the code” that is achieved with calling a function.

Another aside, because the code performs a switch on the object type, not the object, I can’t do this:

case IDictionary d:

The switch statement operates on the object type so that various attributes of the type can be inspected, as the code above illustrates.

SetValue, SetQuotedValue, SetNullValue

These three methods, depending on the object type, set the value of the property for the node:

private static void SetValue(PropertyNode pn)
{
    pn.Node.Value = pn.Object.ToString();
}

private static void SetQuotedValue(PropertyNode pn)
{
    pn.Node.Value = pn.Object.ToString().Quote();
}

private static void SetNullValue(PropertyNode pn)
{
    pn.Node.Value = NULL_VALUE;
}

Again, one-liners that describe in the function name what they do.

ActionMap

This is where the fun really lies, and this is where the recursion occurs.  ActionMap is a function that takes an enumerable and an action; it iterates over the enumeration, calling the action function for each item in the enumeration:

public static void ActionMap<T>(IEnumerable<T> iterator, Action<T> action)
{
    iterator.ForEach(pi => action(pi));
}

The key here is what you pass in for the iterator and action! 

This is Where Partial Application Fits In

There are two recursive processes:

  1. Collection of objects.
  2. Properties of an object.

Both have an action associated with how to process the item being iterated (collection or property.)  For both functions, the first parameter is the same, a PropertyNode, and the second parameter varies based on the type of item being iterated:

CreateCollectionChild(PropertyNode pn, object item)
CreatePropertyChild(PropertyNode pn, PropertyInfo pi)

By using partial application, we can specify the PropertyNode object, and return a delegate that has one parameter, either object or PropertyInfo.  This delegate matches the signature of the action parameter of ActionMap:

Action<T> action

By “careful” planning of the parameter order of the methods that handle the iterator objects, we can use partial application to provide the common parameters and let another function determine the final parameter.  And the beauty is that even C#’s type inference cooperates here: in the function definition:

ActionMap<T>(IEnumerable<T> iterator, Action<T> action)

Type inference figures out what <T> is!

Processing an IList

Let’s look at how an IList is handled.  Given:

ActionMap(CollectionIterator(pn), c => CreateCollectionChild(pn, c));

The first parameter is actually a function call that returns IEnumerable<object>:

private static IEnumerable<object> CollectionIterator(PropertyNode pn)
{
    foreach (var obj in (IEnumerable)pn.Object)
    {
        yield return obj;
    }
}

Simple enough.  The second parameter c => CreateCollectionChild(pn, c), however, is the partial application of the method that processes each item in the enumeration:

private static void CreateCollectionChild(PropertyNode pn, object item)
{
    CreateInstanceTree(item, pn);
}

Yet again, it’s a one-liner, and this is where the recursion occurs!

Processing an Object

Processing a non-value type (other than string) is similar.  Given:

ActionMap(PropertyIterator(pn), c => CreatePropertyChild(pn, c));

We have a different iterator that reflects over the properties that aren’t indexers:

private static IEnumerable<PropertyInfo> PropertyIterator(PropertyNode pn)
{
    var objectProperties = pn.ObjectType.GetProperties(BindingFlags.Instance | BindingFlags.Public);

    return Filter(objectProperties, IsNotIndexer);
}

where Filter does this:

private static IEnumerable<T> Filter<T>(IEnumerable<T> props, Func<T, bool> filter)
{
    foreach (var prop in props)
    {
        if (filter(prop))
        {
            yield return prop;
        }
    }
}

and another one-liner:

private static bool IsNotIndexer(PropertyInfo prop)
{
    return prop.GetMethod.GetParameters().Count() == 0;
}

This could have been replaced with a Linq where clause, but again, for the purposes of this article, I want to explicitly demonstrate that this is a filter operation with an explicit filtering function.

Each of the PropertyInfo items that ActionMap processes is handled again by the partially applied function that we defined with c => CreatePropertyChild(pn, c):

private static void CreatePropertyChild(PropertyNode pn, PropertyInfo pi)
{
    object val = pi.GetValue(pn.Object);
    CreateInstanceTree(val, pn, pi.Name);
}

Processing an IDictionary

Given:

ActionMap(DictionaryIterator(pn), c => CreateCollectionChild(pn, c));

We’ve already seen what CreateCollectionChild does, and again note the partial application.  It’s actually the DictionaryIterator that’s more interesting here:

private static IEnumerable<KeyValue> DictionaryIterator(PropertyNode pn)
{
    IDictionary dict = (IDictionary)pn.Object;

    return DoubleEnumeratorMap(dict.Keys.GetEnumerator(), dict.Values.GetEnumerator(), CreateKeyValue);
}

Here we use a specialized map that takes two enumerators and iterates over both of them at the same time, like this:

public static IEnumerable<R> DoubleEnumeratorMap<R>(IEnumerator e1, IEnumerator e2, Func<object, object, R> map)
{
    while (e1.MoveNext())
    {
        e2.MoveNext();
        yield return map(e1.Current, e2.Current);
    }
}
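To see DoubleEnumeratorMap in isolation, here is a self-contained sketch (the wrapper class and PairUp helper are mine, for illustration) that zips the keys and values of a non-generic Hashtable; a single entry is used because Hashtable enumeration order is unspecified:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

public static class MapDemo
{
    // The article's double-enumerator map: advance two enumerators in lock step,
    // mapping each pair of Current values to a result.
    public static IEnumerable<R> DoubleEnumeratorMap<R>(IEnumerator e1, IEnumerator e2, Func<object, object, R> map)
    {
        while (e1.MoveNext())
        {
            e2.MoveNext();
            yield return map(e1.Current, e2.Current);
        }
    }

    // Pair each key with its value as "key=value" strings.
    public static List<string> PairUp(Hashtable dict)
    {
        return DoubleEnumeratorMap(
            dict.Keys.GetEnumerator(),
            dict.Values.GetEnumerator(),
            (k, v) => k + "=" + v).ToList();
    }
}
```

Calling MapDemo.PairUp(new Hashtable { { "answer", 42 } }) produces a list containing "answer=42".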

The point of the mapper is to convert dictionary key-value pairs (I don’t use the generic KeyValuePair<K, V> struct so I can maintain compatibility with non-generic classes that implement IDictionary) into my own KeyValue object:

public class KeyValue
{
    public object Key { get; set; }
    public object Value { get; set; }
}

We use the concrete class KeyValue instead of an anonymous object simply because it makes the tree display look better, as the intermediate key-value tree node’s text display will say “KeyValue.”

The CreateKeyValue is a transformation that is ridiculously simple:

private static KeyValue CreateKeyValue(object key, object val)
{
    return new KeyValue() { Key = key, Value = val };
}

Again, this could have been coded inline as a lambda expression:

return DoubleEnumeratorMap(
    dict.Keys.GetEnumerator(),
    dict.Values.GetEnumerator(),
    (k, v) => new KeyValue() { Key = k, Value = v });

That’s It!

That’s really all there is to it — the code is semantically expressive, the functions are small, and partial application is used to create a generic action mapper.

Populating the tree view is a simple exercise in recursion:

protected void PopulateTreeView(TreeNodeCollection tvNodes, Node node)
{
    TreeNode childNode = tvNodes.Add(node.DisplayText);
    node.ChildNodes.ForEach(child => PopulateTreeView(childNode.Nodes, child));
}

Conclusion

Why Are All The Class Methods Static?

Mike Hadlow has a great post on programming entirely with static methods, which is well worth the read.  The methods in my parser class are all static because there is no state maintained by the class.  This is one of the wonderful things about functional programming style.   Obviously, there are of course times when state needs to be managed (we live in a state-full world, after all), but for something like this, state is completely unnecessary.  As Mike writes:

“I guess if I look at my programming career, it has the following progression:

Procedural –> Object-Oriented –> Functional

The OO phase now looks like something of a detour.

C# has all the essential features you need for functional programming – higher-order functions, closures, lambda expressions – that allow you to entirely ditch the OO programming model. This results in more concise, readable and maintainable code. It also has a huge impact on unit testing, allowing one to do away with complex mocking frameworks, and write far simpler tests.”

My experience completely concurs with this.

Could This be Programmatically Generated?

The subtext to this article is the question I was asking myself, “how would I diagram this so that the code could potentially be automatically generated?”  I put together the above high-level block diagram, and it seems that if “something” knew what each of the boxes meant, the code for this could have been auto-generated.  Of course, it would probably look a lot different than what I wrote!  Nevertheless, it’s a question that continues to intrigue me as to how to diagrammatically (without descending to the level of a flowchart) express the code.  I personally believe that object-oriented techniques are the wrong way to go — a more functional programming style and using match expressions is actually more intuitive and expressive when it comes to translating a diagram to code.

About The Code Download

The code download consists of five projects:

  1. InstanceTreeMapper – This was my first cut of the code, if you look at Mapper.cs, the PopulateProperties method, you’ll see the 130 line mess that I created.
  2. MapperUnitTests – these are unit tests that verify the tree created by the InstanceTreeMapper project.  You’ll notice a couple of things here, such as indexing collections, that are not in the code I presented in this article.
  3. MapReduceFilterExample – that’s the project related to this article.  If selected as the startup project, it dumps a test tree to the console.
  4. SampleInstance – my original test object graph test case.
  5. WinFormDemo – A WinForm tree.  This renders the tree using the code in this article.  In CreateInstanceTree (DemoForm.cs) switch the commented out code to see the rendering of the different algorithms.  Notice the kruft in PopulateTreeView for the rendering in my original take.

Some Further Reading

Map/Reduce: https://en.wikipedia.org/wiki/MapReduce

A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies). The “MapReduce System” (also called “infrastructure” or “framework”) orchestrates the processing by marshalling the distributed servers, running the various tasks in parallel, managing all communications and data transfers between the various parts of the system, and providing for redundancy and fault tolerance.

List Comprehension: https://en.wikipedia.org/wiki/List_comprehension

A list comprehension is a syntactic construct available in some programming languages for creating a list based on existing lists. It follows the form of the mathematical set-builder notation (set comprehension) as distinct from the use of map and filter functions.

Fold: https://en.wikipedia.org/wiki/Fold_(higher-order_function)

In functional programming, fold (also termed reduce, accumulate, aggregate, compress, or inject) refers to a family of higher-order functions that analyze a recursive data structure and through use of a given combining operation, recombine the results of recursively processing its constituent parts, building up a return value. Typically, a fold is presented with a combining function, a top node of a data structure, and possibly some default values to be used under certain conditions. The fold then proceeds to combine elements of the data structure’s hierarchy, using the function in a systematic way.

Is Map/Reduce/Filter Turing Complete?

https://www.codeproject.com/Lounge.aspx?msg=5408388#xx5408388xx
https://swizec.com/blog/are-map-reduce-and-filter-turing-complete/swizec/6341
