«Insert Name Here»

27 April 2012

Announcing planar-graph

Filed under: Graphs,Haskell — Ivan Miljenovic @ 8:22 PM

I’ve been working on a new planar graph library on and off for just over a year now.

I realise that this might not be exactly a much-sought-after library, but I’ve been using this as a test-bed for various ideas I’ve been having for a “normal” graph library. I’m going to discuss various aspects of the design of this library and some ideas I’ve had for an extensible graph library.

What is a graph?

The standard definition of a graph is as follows:

a graph is an ordered pair G = (V, E) comprising a set V of vertices or nodes together with a set E of edges or lines, which are 2-element subsets of V (i.e., an edge is related with two vertices, and the relation is represented as unordered pair of the vertices with respect to the particular edge).

However, this definition is rather limiting: by assuming that an edge consists of a two-element subset of the vertices, we make it harder to deal with computationally, since we don’t have a data-structure that directly represents a two-element set. In practice, we instead tend to use something like (Node,Node). However, this isn’t ideal:

  • Using this representation implicitly makes graphs directed rather than undirected, unless you do a lot more bookkeeping by checking both elements of the tuple. In practice this may not be too much of a problem, as people often want directed graphs anyway, but it’s still not ideal.

  • The directionality of an edge is now part of its definition rather than being a property of the edge: that is, whether the edge is meant to be directed or not should be a value that can be determined from the edge; currently all edges are directed and undirected graphs are simulated with two inverse edges.

  • Multiple edges become difficult to handle: if you want to delete an edge between node n1 and node n2 and there are three edges there, how do you know which one to delete? In practice, the answer seems to be “all of them”.

A preferable definition was given by W. T. Tutte in a 1961 paper:

A graph G consists of a set E(G) of edges and a (disjoint) set V(G) of vertices, together with a relation of incidence which associates with each edge two vertices, not necessarily distinct, called its ends. An edge is a loop if its ends coincide and a link otherwise.

Note that no indication is given of E being a set of two-sets: instead, a mapping exists from every e ∈ E to the two endpoints of that edge.

Planar graphs

A planar graph is a graph that can be drawn on a specified surface (usually a plane or a sphere) such that no edges intersect/cross except at their endpoints.

When considering a planar graph programmatically, we also want to take into account its embedding (i.e. where all the edges adjacent to a node are in relation to each other). As such, identifying edges solely by their end points fails completely if there are multiple edges, and using a unique identifier for each edge is preferable.

But the difficulty of endpoint identification (i.e. distinguishing between (n1,n2) and (n2,n1)) remains. As such, several implementations of planar graphs use two identifiers for each edge. More about this later.

Library implementation

As I said earlier, I’ve been using the development of this library as a way to experiment with various approaches to how to design a graph library, all of which I intend to use in a non-planar graph library.

Abstract node identifiers

Most existing graph libraries (e.g. Data.Graph and fgl) use a type alias on Int values to represent vertices/nodes. Furthermore, when creating a graph, they require that you:

  1. Explicitly come up with a new node identifier for each new node;

  2. Make sure you don’t re-use an existing identifier.

Whilst this isn’t a big problem for bulk creation of a graph (i.e. you have some arbitrary [a] representing the nodes and edges represented as [(a,a)], in which case a zip-based solution, sketched below, can be used to assign node identifiers, though it would be a tad messy), it isn’t ideal for adding new nodes after the fact and it is also open to abuse.
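
A small illustration of that zip-based approach (illustrative only; this is not code from any of the libraries mentioned):

import qualified Data.Map as M

-- Assign Int identifiers to the nodes by zipping, then translate the edge
-- list to use those identifiers.  An edge mentioning a node that isn't in
-- the node list will cause a lookup error.
assignIdentifiers :: (Ord a) => [a] -> [(a,a)] -> ([(Int,a)], [(Int,Int)])
assignIdentifiers ns es = (zip [0..] ns, [ (idx f, idx t) | (f,t) <- es ])
  where
    table = M.fromList (zip ns [0..])
    idx n = table M.! n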

As such, planar-graph does not permit users to create node identifiers: the constructor isn’t exported, it isn’t an instance of Enum or Bounded, etc. Instead, you provide the label for the new node you want to add and the function returns the updated graph and the identifier for the new node. When the node identifiers are changed for some reason (e.g. merging two graphs), a function is returned that allows you to update the values of any node identifiers you’ve been storing elsewhere.
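
A minimal, self-contained sketch of this idea (heavily simplified; the types, names and signatures here are hypothetical rather than planar-graph’s actual API): because the Node constructor is not exported, the only way to obtain an identifier is to ask the graph for one.

module AbstractIds (Node, Graph, emptyGraph, addNode) where

import qualified Data.Map as M

-- The Node constructor stays hidden inside this module.
newtype Node = Node Int deriving (Eq, Ord, Show)

data Graph n = Graph { nextId :: Int, nodeLabels :: M.Map Node n }

emptyGraph :: Graph n
emptyGraph = Graph 0 M.empty

-- You supply the label; the graph hands back a fresh identifier along with
-- the updated graph.
addNode :: n -> Graph n -> (Node, Graph n)
addNode lbl (Graph i lbls) = (v, Graph (i + 1) (M.insert v lbl lbls))
  where
    v = Node i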

Show and Read instances are available and leak some of the internals out, but you have to be really persistent to abuse them to create your own identifier values: you need to explicitly call read on the String to get it to parse, as the output isn’t valid Haskell code (the instances exist solely for debugging).

Half-edges

As intimated earlier, each edge is actually represented by two half-edges: an edge from n1 to n2 is “stored” twice, as one half-edge n1 -> n2 and its inverse n2 -> n1 (this also includes loops). Each half-edge has its own unique identifier (which is abstract, just as with nodes), and a mapping function exists that lets you determine a half-edge’s inverse.

Most graph implementations are something like a newtyped version of:

type Graph = Map Node [Node]

where each node has information on its adjacent nodes. Or, if we consider a graph with labels, we have:

type LabelledGraph n e = Map Node (n, [(Node,e)])

Instead, the definition of planar-graph looks more like (just considering the labelled version):

type PlanarGraph n e = ( Map Node (n, [Edge])
                       , Map Edge (Node, e, Node, Edge)
                       )

where a mapping exists between a half-edge identifier and the corresponding node that it comes from, the label of that half-edge, the node that it is going to and its inverse half-edge. This definition matches the mathematical ones stated earlier much more closely.

Now, this half-edge usage might be a requirement for a planar graph data structure, but it is also viable for non-planar graphs. First of all, if we wished to allow multiple edges between two nodes, then the traditional representation must be altered slightly:

type LabelledGraph' n e = Map Node (n, [(Node,[e])])

Each edge representation now keeps a list of edge labels, one per edge between the two nodes. Extra bookkeeping is required about what to do when that list becomes empty (and in fact previous fgl versions had an implementation where this list wasn’t considered at all and thus multiple edges would silently fail).

Also, consider how fgl-style graphs are implemented:

type FGLGraph n e = Map Node ([(Node,e)], n, [(Node,e)])

Here, each node stores not only the outgoing edges but also the incoming edges for efficiency reasons (otherwise we’d need to traverse all edges in the graph to determine what the incoming edges are). This also leads to possible data corruption issues as each edge label is stored twice.

However, with our half-edge implementation, neither of these points need any change: each multiple edge has its own unique identifier, and to obtain the incoming edges we just determine the inverse of all the outgoing edges (though technically this point isn’t quite valid when considering directed graphs, as planar-graph treats them differently; see the next section).
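
To make this concrete, here is a small sketch using the simplified Map-based representation above, with placeholder Node and Edge types standing in for the abstract identifiers (this is not planar-graph’s actual code):

import qualified Data.Map as M

-- Placeholder identifier types; in planar-graph these are abstract.
newtype Node = Node Int deriving (Eq, Ord, Show)
newtype Edge = Edge Int deriving (Eq, Ord, Show)

-- The simplified labelled representation from above.
type PlanarGraph n e = ( M.Map Node (n, [Edge])
                       , M.Map Edge (Node, e, Node, Edge)
                       )

-- The incoming half-edges of a node are just the inverses of its outgoing
-- half-edges; no second adjacency list (and no duplicated labels) is needed.
incomingEdges :: PlanarGraph n e -> Node -> [Edge]
incomingEdges (nodeMap, edgeMap) v =
    [ inv | e <- outgoing, Just (_, _, _, inv) <- [M.lookup e edgeMap] ]
  where
    outgoing = maybe [] snd (M.lookup v nodeMap)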

Distinguishing between structure and content

Most graph implementations conflate the structure of the graph (i.e. which nodes and edges there are) with the information that the graph is representing. One example is the question of graph orientation: in fgl, a graph can be considered to be undirected if each edge is represented twice; however, it is quite possible that such a graph is not undirected but just happens to have an inverse for each directed edge (e.g. in some kind of flow algorithm).

Whilst it is not fully formalised as yet, in planar-graph the orientation of a graph is dictated by its half-edge labels. In the use case that prompted the development of this library, I needed mixed orientation within a graph: one half-edge pairing might have a fixed direction on one half-edge whilst its inverse had a label of “InverseEdge”; other pairings might both have some kind of partial edge label.

But the actual edge identifiers didn’t change: I could apply a mapping function to transform all edge labels to () and thus make the graph “undirected”, but I didn’t need to change the actual structure of the graph to do so.

I believe this is a much more useful way of considering graphs, where the information that the graph represents can be found in the node and edge labels, not the identifiers.

Serialisation and encoding

I needed to be able to encode my planar graphs using various binary encodings (e.g. PLANAR_CODE, described in Appendix A here). Now, I could have written custom encoding functions from a PlanarGraph to a ByteString for every possible encoding; however, since they all follow the same basic underlying structure, I decided to utilise an intermediary list-based representation.

However, I then realised that this representation could also be used for Show and Read instances for the graphs as well as pretty-printing functions. The definition of the representation is:

[( node index
 , node label
 , [( edge index
    , node index that this edge points to
    , edge label
    , inverse edge index
   )]
)]
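
Written down as an actual type (re-using the placeholder Node and Edge identifier types from the half-edge sketch earlier; the name and exact shape of planar-graph’s own definition may differ), this is roughly:

type SerialisedGraph n e =
    [ ( Node        -- node index
      , n           -- node label
      , [ ( Edge    -- edge index
          , Node    -- node this half-edge points to
          , e       -- edge label
          , Edge    -- inverse edge index
          ) ]
      ) ]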

For Show, Read, etc. this is basically a raw dump of the graph (which means it is technically open to abuse as the internals of the abstract identifiers are accessible this way, but I had to draw the line somewhere); the deserialise function that is utilised by Read also ended up being useful for an internal function, saving me from having to construct a graph there manually!

For encoding/decoding of binary representations, a re-numbering of the identifiers according to a breadth-first traversal is first undertaken (as many require that the identifiers be 0 ... n-1 and for decoding the order of edges for each node is important) and then the same structure is used. A class is then used to convert the graph into this representation and then convert it to the encoding of your choice.
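
The renumbering step itself is nothing planar-specific; a rough sketch of the idea (not the library’s actual code) is to walk the graph breadth-first from some starting node and hand out 0, 1, 2, … in the order in which nodes are first reached:

import           Data.List (nub)
import qualified Data.Map  as M
import qualified Data.Set  as S

-- Breadth-first order of the nodes reachable from a starting node, given a
-- neighbour function.  (Encodings such as PLANAR_CODE assume a connected
-- graph, so reachability from one node suffices for this sketch.)
bfsOrder :: (Ord a) => (a -> [a]) -> a -> [a]
bfsOrder neighbours start = go (S.singleton start) [start]
  where
    go _    []       = []
    go seen (x:rest) = x : go seen' (rest ++ new)
      where
        new   = nub (filter (`S.notMember` seen) (neighbours x))
        seen' = foldr S.insert seen new

-- Renumber nodes as 0 .. n-1 in breadth-first order.
renumber :: (Ord a) => (a -> [a]) -> a -> M.Map a Int
renumber neighbours start = M.fromList (zip (bfsOrder neighbours start) [0..])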

However, not all of the encodings require that the graph be planar: whilst a breadth-first traversal doesn’t make as much sense for non-planar graphs, the same framework could be used for other graph types.

Plans for the new graph library

I haven’t even started to prototype this, as some of the ideas I’m listing below I only started to mentally flesh out over the past few days. However, I think that it should work and will provide a useful approach to dealing with graphs in Haskell.

The root of my idea is that we often have different properties that we want graphs to hold: should it be a directed graph? What should the identifier types be? Is there any way to automatically determine all nodes with a certain label (e.g. for graph colouring)? What kind should the graph have?

The current method of dealing with this is to pick a graph implementation and just live with it. This “solution” clearly has problems, not least of which is that if you try to do anything else you have to re-implement everything yourself.

However, consider this: we have a method of generalising monads via monad transformers: why not do the same thing with graphs?

Now, I’m not the first person to think of this; Edward Kmett has already released a library that has this kind of characteristic (though his classes differ from how I plan to structure/distinguish mine, being more implementation-based than property-based IMO).

What my plans entail is this:

  • Define a GraphTransformer class. Not only will graph transformers be instances of this class, but so will the actual graph types (with identities for the different lift/unlift functions):

    class (Graph (SubGraph gt)) => GraphTransformer gt where
        type SubGraph gt :: *
    
        .....
    
  • The Graph class then requires that the instance type also be an instance of GraphTransformer, and has default definitions of all methods using the lift/unlift functions. These default definitions will resolve down to f = f definitions for actual graph types, but for transformers will just apply the function down the stack (a minimal sketch of this delegation pattern is given after this list).

    This class only defines “getters”: e.g. determine the size of the graph, get all its nodes or edges, etc.

  • Other classes are defined as sub-classes of Graph, again using lift/unlift functions from GraphTransformer for default definitions of the methods.

  • Most classes (except for where it’s necessary, e.g. a class defining mapping functions) will assume that the graph is of kind *.

  • Most transformers will assume that the underlying graph is of kind * -> * -> * (i.e. that you can specify node and edge labels) so that you can make the transformer an instance of any class that requires kind * -> * -> *, but it should be possible to make transformers that take in a graph of kind *.

  • Because of the default definitions using lift/unlift functions, most class instances for the graph transformers will be of the form instance Graph (MyTransformer ExistingGraph a b); this means that if you want to newtype your graph transformer stack, writing the instances will be trivial (albeit rather repetitive and boring).

    As such, if a transformer only affects one small aspect of graph manipulation (e.g. a transformer that keeps a Map of node labels to node IDs so you can more efficiently look up all nodes that have a particular label), then you only need to provide explicit definitions for those classes and methods (in this case, adding and deleting nodes and looking up node identifiers based upon labels rather than filtering on the entire list of nodes).

    However, this does mean that for any kind of unique operation you can think of (e.g. in the example above, the ability to find all identifiers for nodes that have a particular label), you will need to create a new class and make appropriate instances for underlying types (if possible) and existing transformers (so that if you put extra transformers on the stack you can still use the improved definitions for this transformer).

  • Usage of the serialisation and encoding functionality for all graphs. This will provide Show/Read instances for graphs, pretty-printing, easy Binary instances (using the serialised form) and any available encodings specified.

    The actual method of this may change from what I’ve described above, as whilst a breadth-first traversal of a planar graph is unique up to the first edge chosen, for non-planar graphs this isn’t the case. However, for encodings that don’t assume a planar graph this shouldn’t be a problem.

  • Whilst the provided underlying graph types might use abstract node identifiers, it will not be required for instances of Graph to do so (and a transformer will be provided to let you specify your own type, that under-the-hood maps to the generated abstract identifiers). However, I can’t see a way around having edges using some kind of abstract edge identifier, as it isn’t as common to attach a unique label to each edge.
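
Here is the minimal sketch of the delegation pattern promised above. Everything in it is hypothetical (the class methods and the toy Directed transformer in particular), the superclass constraint from the earlier snippet is omitted for brevity, and in the real design the delegating definitions would live in the classes as defaults rather than being written out per instance:

{-# LANGUAGE TypeFamilies #-}

-- A "getter"-only graph class.
class Graph g where
  order :: g -> Int                  -- number of nodes

-- A transformer knows how to peel itself off to expose the wrapped graph.
class GraphTransformer gt where
  type SubGraph gt :: *
  unliftG :: gt -> SubGraph gt

-- A toy transformer that (say) records directedness in its edge labels.
newtype Directed g = Directed { unDirected :: g }

instance GraphTransformer (Directed g) where
  type SubGraph (Directed g) = g
  unliftG = unDirected

-- The transformer's Graph instance simply delegates each getter down the
-- stack; a base graph type would supply real definitions instead.
instance (Graph g) => Graph (Directed g) where
  order = order . unliftG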

Generally, transformers will utilise the node and edge labels of the underlying graph stack to store extra metadata (e.g. directionality of the edges); this is why the transformers will typically require that the underlying type is of kind * -> * -> *. However, the question then arises: should users be aware of these underlying type transformations? For example, should this information leak out with a Show instance?

My current thinking is that it shouldn’t: the output from Show, prettyPrint, etc. should be as if there were no transformers being used. The main counter-example I can think of is having some kind of indicator of whether each listed half-edge is the “real” one or not, especially when using a transformer that makes it a directed edge (though in this case it can be solved by only listing the “primary” half-edges and not their inverses; this again works for most graph types but not planar ones, as all half-edges need to be listed for the graph to be re-created due to the embedding of edge orders).

Assuming this all works (I plan to start playing with it next week), I think this approach will do quite well. As you’re writing your code, if you use a newtype/type alias for your graph type, you can just add or remove a transformer from your stack without affecting usage (in most cases: some transformers might change which classes a stack can be used in; e.g. a transformer that lets you specify node identifiers will require using a different class for adding nodes than one that generates and returns identifiers for you). Then at the end if you want to try and tweak performance, you can always go and write custom transformers (or indeed your own graph type if you want to merge all the functionality into the actual type without using transformers) without having to change your code.

If this all works as I think/hope it will… ;-)

16 October 2011

graphviz in vacuum

Filed under: Graphs,Haskell — Ivan Miljenovic @ 8:49 PM

During the past week, Conrad Parker announced on Google+ (though it wasn’t a public post so I can’t seem to link to it) that he had decided to take over maintainership (at least until someone else says they want to do it) of vacuum since Matt Morrow hasn’t been seen for a while.

I decided to take the opportunity to replace the current explicit String-based (well, actually Doc-based) mangling used to create Dot graphs for use with Graphviz from vacuum with usage of my graphviz library. I’ve just sent Conrad a pull request on his GitHub repo, and I decided that this would make a suitable “intro” tutorial on how to use graphviz.

First of all, have a look at the current implementation of the GHC.Vacuum.Pretty.Dot module. If you read through it, it’s pretty straight-forward: convert a graph in an adjacency-list format into Dot code by mapping a transformation function over it, then attach the required header and footer.

Note though that this way, the layout/printing aspects are mixed in with the actual conversion part: rather than separating the creation of the Dot code from how it actually appears, it’s all done together.

There’s also a mistake in there that probably isn’t obvious: the “arrowname=onormal” part of the definition of gStyle is completely useless: there is no such attribute as “arrowname”; what is probably meant there is “arrowhead” or “arrowtail”.

Let’s now consider the implementation using version 2999.12.* of graphviz (the version is important because even whilst doing this I spotted some changes I’m likely to make in the next version for usability purposes):

{-# LANGUAGE OverloadedStrings #-}

module GHC.Vacuum.Pretty.Dot (
   graphToDot
  ,graphToDotParams
  ,vacuumParams
) where

import Data.GraphViz hiding (graphToDot)
import Data.GraphViz.Attributes.Complete( Attribute(RankDir, Splines, FontName)
                                        , RankDir(FromLeft), EdgeType(SplineEdges))

import Control.Arrow(second)

------------------------------------------------

graphToDot :: (Ord a) => [(a, [a])] -> DotGraph a
graphToDot = graphToDotParams vacuumParams

graphToDotParams :: (Ord a, Ord cl) => GraphvizParams a () () cl l -> [(a, [a])] -> DotGraph a
graphToDotParams params nes = graphElemsToDot params ns es
  where
    ns = map (second $ const ()) nes

    es = concatMap mkEs nes
    mkEs (f,ts) = map (\t -> (f,t,())) ts

------------------------------------------------

vacuumParams :: GraphvizParams a () () () ()
vacuumParams = defaultParams { globalAttributes = gStyle }

gStyle :: [GlobalAttributes]
gStyle = [ GraphAttrs [RankDir FromLeft, Splines SplineEdges, FontName "courier"]
         , NodeAttrs  [textLabel "\\N", shape PlainText, fontColor Blue]
         , EdgeAttrs  [color Black, style dotted]
         ]

(The OverloadedStrings extension is needed for the FontName attribute.)

First of all, note that there is no mention or concept of the overall printing/structure of the Dot code: this is all done behind the scenes. It’s also simpler this way to choose custom attributes: Don Stewart’s vacuum-cairo package ends up copying all of this and extra functions from vacuum just to have different attributes; here, you merely need to provide a custom GraphvizParams value!

So let’s have a look more at what’s being done here. In graphToDotParams, the provided adjacency list representation [(a,[a])] is converted to explicit node and edge lists; the addition of () to each node/edge is because in many cases you would have some additional label attached to each node/edge, but for vacuum we don’t. There is a slight possible error in this, in that there may be nodes present in an edge list but not specified directly (e.g. [(1,[2])] doesn’t specify the “2” node). However, Graphviz doesn’t require explicit listing of every node if it’s also present in an edge, and we’re not specifying custom attributes for each node, so it doesn’t matter. The actual grunt work of converting these node and edge lists is then done by graphElemsToDot in graphviz.

The type signature of graphToDotParams has been left loose enough so that if someone wants to specify clusters, it is possible to do so. However, by default, graphToDot uses the specified vacuumParams which have no clusters, no specific attributes for each node or edge but does have top-level global attributes. Rather than using Strings, we have a list of GlobalAttributes, with one entry for each of top-level graph, node and edge attributes (the latter two applying to every node/edge respectively). I’ve just converted over the attributes specified in the original (though dropping off the useless “arrowname” one). Some of these attributes have more user-friendly wrappers that are re-exported by Data.GraphViz; the other three need to be explicitly imported from the complete list of attributes (for these cases I prefer to do explicit named imports rather than importing the entire module so I know which actual attributes I’m using). I am adding more attributes to the “user-friendly” module all the time; RankDir will probably make its way over there for the next release, with a better name and documentation (and thus not requiring any more imports).

Now, you might be wondering how I’ve managed to avoid a (a -> String) or similar function like the original implementation had. That’s because the actual conversion uses the PrintDot class (which is going to have a nicer export location in the next version of graphviz). As such, as long as a type has an instance – and ones like String, Int, etc. all do – then it will be printed when the actual Dot code is created from the DotGraph a value.

So how to actually use this? In the original source, there’s a commented out function to produce a png image file. This is achieved by saving the Dot code to a file, then explicitly calling the dot command and saving the output as an image. Here’s the version using graphviz:

{-# LANGUAGE ScopedTypeVariables #-}

import GHC.Vacuum.Pretty.Dot
import Data.GraphViz           (GraphvizOutput (Png), addExtension, runGraphviz)
import Data.GraphViz.Exception

-- Run dot over the generated Dot code and save the result as a PNG
-- (addExtension appends the appropriate file extension to fpre).
graphToDotPng :: FilePath -> [(String,[String])] -> IO Bool
graphToDotPng fpre g = handle (\(e::GraphvizException) -> return False)
                       $ addExtension (runGraphviz (graphToDot g)) Png fpre >> return True

(Note: The exception-handling stuff is just used to provide the same IO Bool result as the original.)
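
As a quick usage example (assuming the module above is in scope and the Graphviz dot tool is installed; the file name and graph here are made up):

main :: IO ()
main = do
  -- Render a tiny adjacency list to "example.png".
  ok <- graphToDotPng "example" [("a", ["b", "c"]), ("b", ["c"])]
  putStrLn $ if ok then "Wrote example.png" else "Graphviz invocation failed"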

I hope you’ve seen how convenient graphviz can be rather than manually trying to produce Dot code and calling the Graphviz tools to visualise it. There are still some kludgy spots in the API (e.g. I would be more tempted now to have the graph to visualise be the last parameter; at the time I was thinking more about using different image outputs for the same graph), so I’d appreciate people telling me how the API can be improved (including which attributes are commonly used).

25 January 2011

A crazy idea about graph visualisation

Filed under: Graphs,Haskell,linux.conf.au — Ivan Miljenovic @ 1:23 PM

I’m currently at linux.conf.au, and this morning I went to a talk by Adam Harvey entitled Visualising Scientific Data with HTML5.

Now, one of the packages I maintain is graphviz, which is sufficient for what it does: using Graphviz to visualise graphs as static images. Despite its various problems, I keep using Graphviz because – unlike most of the flashier graph visualisation programs that I’ve found – it doesn’t require a fancy GUI just to convert a pre-existing graph into an image (admittedly, others may have library versions, but most seem to be written in Java and Python, which do not seem as useful in terms of writing Haskell bindings). However, one thing that Graphviz cannot do is let you dynamically visualise graphs, which is especially useful for extremely large graphs (e.g. call graphs).

One of the visualisation toolkits that Adam talked about was the JavaScript InfoVis Toolkit, which seemed quite nice in how you can dynamically interact with the graphs. The graphs are represented using JSON, and the format looks relatively straightforward.

So here’s my possibly crazy idea: does it make sense to create a companion library for graphviz to convert its DotRepr values into JIT-compatible JSON, possibly with some extensions to assist with the visualisation? We already have various libraries for interacting with JavaScript and JSON on HackageDB, so it may be possible to abstract most of the pain of visualising and interacting with graphs on the web into our preferred language. I’m not quite sure how to deal with incompatible/differing attribute values for Dot vs JIT’s JSON, but is this type of avenue worth considering? Such a conversion library would save having to doubly-convert graphs (in case you want static image versions of the visualisations as well).

So, how crazy am I?

24 January 2011

LCA bid process opens – Canberra at the ready!

Filed under: linux.conf.au — Ivan Miljenovic @ 10:37 AM

Disclaimers: sorry, Haskellers: no Haskell or graph theory in this blog post. Instead this is about linux.conf.au (aka LCA).

For the last several months, a small group of people in Canberra including myself have been preparing a bid for LCA 2013. This is not just to give us more time to make the conference the most awesome, froody LCA you’ve ever been to. No – 2013 is also the centenary of the founding of Canberra as the nation’s capital. It’s a very significant year for us and we’d all be thrilled if we could show the attendees of LCA our great city and Canberrans the great work the FOSS community does to improve everyone’s lives.

So we’re really stoked that the bidding process is going to be opened early, and I think it’ll lead to a really interesting competition that will result, whoever wins, in the best LCA ever!

If you’re interested in getting involved, join the mailing list!

17 January 2011

Graph labels redux and overall plan

Filed under: Graphs,Haskell — Ivan Miljenovic @ 10:49 PM

This is a continuation of my previous post on my thoughts and plans for writing generic graph classes.

Overall Idea

The overall thing I want to do with these generic graph classes is to be able to deal with the vast majority of graph-like data structures in as common a way as possible. Note that I say graph-like: I’m distinguishing here between data structures that match the mathematical definition of a graph (that is, a collection of distinguished objects, where pairs of these objects may be connected in some fashion) and what is usually considered a graph data structure: the difference mainly arises in that we have notions of expected operations on graph data structures that may not be applicable to our graph-like data types. These operations can either be ones that are forbidden (e.g. adding a node to a static type) or partially forbidden (e.g. adding a cycle to a tree).

As such, the classes as they currently stand are mainly informational: what can we determine from this graph-like type? Do we know specific properties about it (e.g. is it an instance of the class that specifies whether or not the graph is meant to be directed)? There will, of course, be classes for graph manipulation, but I see those as secondary components: for example, it doesn’t make sense to consider using standard graph manipulation functions to add or delete values from a PackageIndex as we can’t arbitrarily add values to it.

Such a collection of classes will by necessity be subject to compromise: it is not possible to have a fully-featured set of classes that comprehensively covers every single possible type of graph-like data structure whilst also being small and easy enough to use. After all, there’s no point in writing such classes if no-one uses them because they’re too difficult!

More on graph labels

In my previous post, I said that the best way of dealing with labels is similar to what FGL currently does: force all graph-like types to have both node and edge labels (but without requiring types to have kind * -> * -> * like FGL does). A few people objected; notably, Sjoerd Visscher said that labels should be optional for both nodes and edges, and ideally be part of the overall node and edge types.

In theory, this solution is great (and I actually worked for a while trying to get something like it to work). However, as I stated in the comments, it fails one notable requirement: we now have to specialise functions on graphs according to whether or not the graph has labels, and if so which ones. Specifically, if the behaviour of a function may change depending upon whether or not labels are present, such a solution may require four implementations:

  1. No labels;
  2. Node labels only;
  3. Edge labels only;
  4. Node and edge labels.

Probably the best example I can think of for this is from my graphviz library: consider the preview function as it is currently defined for FGL graphs:

preview   :: (Ord el, Graph gr, Labellable nl, Labellable el) => gr nl el -> IO ()
preview g = ign $ forkIO (ign $ runGraphvizCanvas' dg Xlib)
  where
    dg = setDirectedness graphToDot params g
    params = nonClusteredParams { fmtNode = \ (_,l) -> [toLabel l]
                                , fmtEdge = \ (_, _, l) -> [toLabel l]
                                }
    ign = (>> return ())

This is a relatively simple function that just sets some defaults for the main functions in graphviz. To change this to my proposed layout of compulsory labels mainly requires changes to the type signature (the only implementation change would be the way edges are defined). But with optional labels, either four variants of this function would be required or else the user would have to specify how to distinguish the node/edge identifiers from the labels (if they exist); this latter solution is not satisfactory, as the whole point of this function is to provide defaults to quickly visualise a graph, and as such it should not take any parameters apart from the graph itself.

If an “isInstanceOf” function was available (to determine whether or not the graph type is an instance of the appropriate label classes without needing to specify them as explicit type constraints), then this wouldn’t be a problem: implementers of functions would just need to take into account the four possible label groupings in their code. But as it stands, the implementation of having optional labels breaks the simplicity requirement that I’m taking into account when writing these classes.

Note that I would actually prefer to have distinct/abstract node and edge types that optionally contain labels: for the planar graph library that I’m working on, all operations on edges are done via unique identifiers rather than a data-type isomorphic to a tuple of node identifiers (so as to avoid problems with multiple edges). However, for most graph types such explicit differentiation between edges won’t be required, and in general it will be simpler to both instantiate and use classes when a simpler edge type is used rather than requiring in effect a new data type for each graph (as required when using data families).

Naming and terminology

One thing I’m still not sure about: how shall I deal with the naming of functions when I have both labelled and unlabelled variants of them? Should I take the FGL route of prepending “lab” to them (e.g. nodes vs labNodes)? I’m not sure I like this solution, as I want to try and shift focus to making the labelled versions the defaults (or at least not as clumsy to use): does it make sense to adopt a policy of priming functions to distinguish between labelled and unlabelled (e.g. nodes vs nodes')? Or should some other naming policy be used?

30 December 2010

Graphs and Labels

Filed under: Graphs,Haskell — Ivan Miljenovic @ 9:20 PM

As some of you may be aware, I’ve been working on and off on a new library to define what graphs are in Haskell. This is the first part of a series on some of the thought processes involved in trying to define classes that fit the vast majority of graphs.

One of the first things I’ve been considering for the new graph classes that I’m working on is how to deal with node and edge labels in graphs. My point of view is that graphs contain two separate but related types of information:

  1. The structure of the graph.
  2. The information explaining what the structure means.

As an example, consider graph colouring: we have the actual structure of the graph and then the colours attached to individual vertices (or edges, depending how you’re doing the colouring). Another example is a flow graph, where the distances/weights are not an actual “physical” part of the graph structure but nevertheless form an important part of the overall graph.

Yet there are times when the extra labelling/information is an inherent part of the structure: either we are concerning ourselves solely with some graph structural problem (e.g. connected components) or – more commonly when programming – the information about the structure is embedded within the structure (for example, Cabal’s PackageIndex type: this is simplistically equivalent to an unlabelled graph with PackageIndexID as the node type).

As such, I’ve come up with at least three different ways of dealing with graph labels:

  1. A graph can choose whether or not it has node or edge labels (if I understand correctly, this is the approach taken by the Boost Graph Library for C++).
  2. A graph either has no labels or it has both node and edge labels.
  3. All graphs must have both node and edge labels (even if they’re just implicit labels of type ()).

Something along the lines of the first two options is very tempting: there is no requirement to force graphs that don’t have or need labels to pretend to have them just to fit the constraints of some class. Furthermore, different graph types can thus be more specific in terms of which graph classes they are instances of.

However, there is a problem here: duplication. Let us consider a simplified set of graph classes that fits the second option:

{-# LANGUAGE TypeFamilies #-}

class Graph g where
  type Node g

  nodes :: g -> [Node g]

  edges :: g -> [Edge g]

type Edge g = (Node g, Node g)

class (Graph g) => LabelledGraph g where
  type NLabel g

  type ELabel g

  labNodes :: g -> [(Node g, NLabel g)]

  labEdges :: g -> [(Edge g, ELabel g)]

So if some graph type wants to be an instance of LabelledGraph, it must specify two ways of getting all of the nodes available (admittedly, it will probably have something along the lines of nodes = map fst labNodes, but wouldn’t it be nice if this could be done automatically?).

But OK, writing a set of classes and then instances for those classes is a one-off cost. Let’s say we accept that cost: the problems don’t stop there. First of all, consider something as simple as adding a node to the graph. There is no way (in general) that the two classifications (labelled and unlabelled) can share a method to add a node, etc. Furthermore, this segregation would spread to other aspects of using a graph: almost all algorithms/functions on graphs would thus need to be duplicated (if possible). Since one of the main criteria I have for designing this library is that it should be possible to use graphviz to visualise the PackageIndex type, this kind of split is not something I think would be beneficial.

As such, the only real viable choice is to enforce usage of labels for all graphs. This might be to the detriment of graphs without labels, but I’m planning on adding various functions that let you ignore labels (e.g. a variant of addNode that uses mempty for the label, which means it’s usable by graphs that have () as the label type). The distinction between nodes and labNodes above could also be made automatic, with only the latter being a class method and the former being a top-level function.
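
For instance, building on the classes sketched above (with TypeFamilies enabled; these are illustrative definitions rather than anything final), the unlabelled accessors could simply be:

-- If labNodes/labEdges were the only class methods, the unlabelled variants
-- become ordinary top-level functions rather than extra methods to implement.
nodes :: (LabelledGraph g) => g -> [Node g]
nodes = map fst . labNodes

edges :: (LabelledGraph g) => g -> [Edge g]
edges = map fst . labEdges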

This solution isn’t perfect: to ensure it works for all suitable graph types, it has to be of kind *. But this means that no Functor-like mapping ability will be available without really ugly type signatures (which the current experimental definition uses), at least until superclass constraints become available (or possibly some kind of kind polymorphism, no pun intended). However, this is still the best available solution that I can come up with at this stage.

3 September 2010

Test dependencies in Cabal

Filed under: Haskell — Ivan Miljenovic @ 7:51 PM

I’ve previously written about my annoyance with Hackage packages that have compulsory testing dependencies (note that I’ve since modified my position from that post, as noted by the presence of optional testing modules for graphviz). However, the situation is definitely getting better: even my old bugbear hmatrix has made the testing dependencies and modules optional by using a Cabal flag of tests.

However, several package maintainers seem to be unaware of a minor subtlety of how Cabal parses dependencies.

Let us consider a simple example: we have a package foo which is primarily a library but also contains a testing executable which uses QuickCheck. The relevant parts of the .cabal file look something like this:

...
Flag test
     Description: Build the test suite, including an executable to run it.
     Default: False

Library
    Build-Depends: base == 4.*, containers == 0.3.*
    Exposed-Modules: Data.Foo

Executable foo-tester
    if flag(test)
        Buildable: True
    else
        Buildable: False

    Main-Is: FooTester.hs

    Build-Depends: QuickCheck >= 2.1 && < 2.1.2

So, we have an optional testing executable called foo-tester, with bonus points for defaulting the flag that enables it to False.

However, this doesn’t quite behave as expected: if we try to build it as-is without enabling the test flag, then Cabal will still make foo depend upon QuickCheck. Why? Because the Build-Depends field sits outside the conditional, so the dependency is not optional (I’m not saying that this behaviour is correct, just that this is how Cabal acts). This became noticeable when QuickCheck-2.2 came out: I upgraded to it, and ghc-pkg check then complained that some packages were now broken.

I’ve pointed out the correct way of doing this to individual maintainers in the past when I noticed it in their packages; now I’m doing it in this blog post in the hope that maintainers of all affected packages will remedy this. To ensure that testing dependencies are only considered when we are indeed building the testing executable, just shift it inside the if-statement:

...
Executable foo-tester
    if flag(test)
        Buildable: True
        Build-Depends: QuickCheck >= 2.1 && < 2.1.2
    else
        Buildable: False

    Main-Is: FooTester.hs

Now QuickCheck will only be brought in when you’re building tests.

This doesn’t just apply to testing executables, but to any conditional dependencies. See for example how I have testing modules built and exported in graphviz’s .cabal file.

25 July 2010

Results of FGL naming survey

Filed under: Graphs,Haskell — Ivan Miljenovic @ 12:26 AM

Eleven days ago I set up a survey to help determine what the community thought the new version of FGL that Thomas Bereknyei and I are working on should be called. This post is about the results from this survey.

About the survey

People that took the survey were asked four things:

  • What name did they prefer: “fgl” or “inductive-graphs”;
  • Did they actually do any graph-related programming (not necessarily using FGL);
  • Any other comments they might have had;
  • Optionally, their name and email address.

Response to comments

Several people had some questions/comments regarding the survey both in the survey itself and on the Haskell Reddit. Here are some responses:

  • Why is there only the option of “fgl” and “inductive-graphs”?

    Because we couldn’t think of any better names. The former is what the library is already called, the latter describes exactly what the library is about (implementing and using graphs in an inductive fashion). Any other names such as “boxes-and-arrows” are, we feel, rather silly and don’t make sense. We did ask, but didn’t hear any other names that were relevant.

  • Why should you even consider using the name “fgl” if this is a new library?

    I don’t want to go through the whole thing all over again, but I’ll summarise. This isn’t a completely new library; it’s just a new implementation (e.g. same as going from QuickCheck-1 to QuickCheck-2; the point of the library is the same, the concepts are the same, the implementation is different and the APIs are incompatible). As for the API incompatibility, that’s what version numbers are for.

  • FGL is a silly name anyway/Acronyms are bad in a package name/The word “graph” should appear in the package name/etc.

    Agreed. However, the package name “fgl” already exists, and I don’t believe in gratuitous proliferation of package names on Hackage as it’s hard enough to navigate as it is. Most people in the Haskell community already know that “fgl” is a graph library, etc. Also see the response to the previous question.

  • You’re the maintainers; why are you bothering to even ask what the name should be?

    Because when we announced our plans, there were a number of vocal people who complained about our “usurpation” (my word, not theirs) of the FGL name for our own library.

  • Why are you planning on using the same module namespace as FGL even if you change the package name? Won’t that cause problems à la mtl and monads-fd?

    Say what you like about the name of the package (I for one agree that it isn’t an ideal name, especially in the world of “modern” Haskell with Hackage, etc.), I think the module namespace is exactly right. And unless we decide to skip the “Data” prefix and just have a stand-alone Graph.* namespace, there isn’t a better namespace available for a package that defines and uses graphs in an inductive fashion. Again, this is a case of “You don’t like it? Fine: pick a better name and if it truly is better we’ll use it.”.

  • If you have to change the API, please provide a compatibility API like Parsec-3 has for Parsec-2.

    This isn’t going to happen for several reasons (and these are just the ones I could think of whilst writing this blog post):

    • Even with the compatibility API, Parsec-3 was slow to be taken up. Admittedly, this was due to a performance regression, but it still doesn’t bode well for how well compatibility APIs fare.
    • Parsec-3 could have a compatibility API because they defined a new module namespace for the new API; we don’t plan or want to do that.
    • If we have a compatibility API now, we’ll be forced to keep using and maintaining it when we’d much prefer people to use the nice new shinier API instead.
    • We plan on providing upgrade paths, such as versions of fgl in the 5.x series that get closer and closer to the new API and various migration guides/tutorials.
    • Most of the function and class names are going to be pretty similar specifically to make porting easier (because of this I’m even planning on using FGL-like terminology for my currently-still-vapourware generic graph library that will eventually provide super-classes for FGL, rather than the more correct graph-theory terminology: Node rather than Vertex).

    We might have some compatibility APIs to help with the transition process (e.g. the noNodes function is going to be replaced with order, which is the proper terminology, but we might define noNodes as an alias), but these will probably be in a different module and it will still not be possible to have code that will work with both the 5.x series of FGL and the new library.

Survey results

Here is the initial overall results from the survey:

  • 66 people responded (Google Spreadsheets keeps lying to me and claiming 67, but it seems to be counting the header row as an actual response…).
  • 27 (≈ 40.9%) people said they preferred “FGL”; the other 39 (≈59.1%) prefer “inductive-graphs”.
  • 40 (≈ 60.6%) of the respondees said they wrote code dealing with graphs.
  • There were 26 (≈ 39.4%) extra comments.
  • Only 23 (≈ 34.8%) of respondees were brave enough to add their name to the response (and one of these was only a single name without an email address).

If we only consider the 40 people who claimed to write code dealing with graphs, only 16 (≈ 40%) of them preferred FGL; as such, actual usage of fgl or other graph libraries does not seem to change the overall opinion of the community (if my vague recollection of how to do statistics is correct, and this is indeed a representative sample of the community).

Other interesting tidbits

  • Martin Erwig (i.e. he-who-wrote-the-original-version-of-FGL) says we should keep using the name “FGL”, laying to rest potential problems that some people have raised.
  • Two people didn’t seem to get the point of the survey: one person indicated that they didn’t care, another made an unrelated comment regarding immature equines. However, they partially cancelled each other out: the former claimed to write graph code and voted for fgl, the latter said they didn’t write any graph code and voted for inductive-graphs.

In the raw

For those that want it, a sanitised (in that the email addresses and names have been removed) copy of the results is available (I would have hosted them on wordpress.com with the blog, but it doesn’t allow text files to be uploaded, and I don’t see the point of creating a full blown word processor document – since spreadsheets can’t be uploaded – just for some CSV data).

And so the decision is?

Well…. there isn’t one. A 60% preference is too close to even for me to categorically say that the Haskell community prefers one name over another. As such, unless someone presents a really good reason otherwise we’re going to stick with FGL (due to inertia if nothing else).

My take on all this

After all this debate, I’d like to point out that I’m more and more agreeing that “inductive-graphs” would make a much better library name. However, as I’ve stated previously (including above), I would prefer to use the “fgl” name somehow if nothing else because it’s already there (so a few years from now when – hopefully – the new graph libraries are available and widely used, we don’t have a useless library sitting around confusing people, especially when it used to be present in GHC’s extralibs and the Haskell Platform).

Yet Another Compromise Proposal (or two)

However, I just thought of two possible solutions which may be satisfactory to all involved:

What about if we call the library we’re working on “inductive-graphs”, but then create a meta-library called “FGL” that isn’t just limited to inductive graphs? That is, once I’ve worked out my generic graph API library, then we take a large subset of the modules defined in the libraries contained within this figure and re-export them from the FGL library. Such a library would be analogous to how the Haskell Platform is an initial starting point of the libraries available on Hackage: an all-in-one subset of the available graph libraries in one overall API if you don’t know which libraries to use specifically, and you can then pare down your dependencies to what you actually use.

Another alternative (which I find less attractive) is that we make the FGL library contain the generic graph API; this way the library still exists, but then it is completely different from the 5.x series. I’m mainly suggesting this just to provide another alternative; I don’t think it really makes sense or is viable.

21 July 2010

Working out the container-classes API

Filed under: Haskell — Ivan Miljenovic @ 10:34 PM

During AusHac, I worked on the container hierarchy I discussed in my previous post, which culminated in the initial release of container-classes. I had initially (and naively) thought I would have been able to whip something like this together on the Friday afternoon and spend the rest of the weekend working on graph libraries; in the end I just managed to release an initial draft version before we had to pack up on Sunday.

Now, I’m not saying this current setup is perfect; it’s basically a direct copy of all list-oriented functions from the Prelude along with a couple of functions from Data.List split into a generic Container class, a Sequence class for containers with a linear structure and Stream for infinite Sequences (i.e. lists and similar structures: those for which it makes sense to define a function like repeat).

First of all, here are a couple of design decisions I made with this library:

  • I want to be able to consider types with kind *; as such, most pre-existing classes are of no use.
  • Even when hacking together support for types of kind * -> * for mapping functions, etc. I couldn’t use Functor as it doesn’t let you constrain the type of the values being stored (for Sets, etc.).
  • To be able to have restrictions, we need to be able to specify the value type as part of the class definition. This means the usage of either MPTCs+fundeps or an Associated Type. I was initially using the latter, but due to the current lack of superclass constraints making the type signatures much uglier and longer, I switched to using MPTCs+fundeps instead.
  • Type signatures should be as short/nice as possible.
  • Provide as many default implementations as possible, and make those as efficient as possible.

However, with these design decisions there are some considerations I have to make:

  • How should I split up the various functions into type-classes? e.g. does it make sense to re-define standard classes like Foldable so that they’ll work with values of kind * (where possible) and if necessary have a constrained value type?
  • At the moment, the main constraints are all inherited from the Container class; if I have lots of smaller classes, is there a nicer way of abstracting out the constraint without duplicating it everywhere? rmonad has the Suitable class, but in practice this seems to mean adding extra Suitable f a constraints to every function; maybe this is because Suitable isn’t a superclass of the other classes though.
  • I’ve tried to define the default definitions of the various class methods with the eventual goal of implementing and using the foldr/build rule (the classic list version of which is sketched after this list, for reference), but I’m not sure how to properly implement such a rule, let alone how well it’s going to work in practice:
    • If someone overrides the defaults to use custom/optimised versions (e.g. all the pre-defined list functions), then the inter-datatype optimisations will no longer be present.
    • As people may use more optimised variants of various class methods, any data type that extends another (using a newtype, etc.) will have to explicitly define each class instance rather than relying on the default definitions (if they want to keep using the optimised variants).
    • Cross-container optimisation could change some fundamental assumptions: e.g. going from a list to a Seq and then back to a list will typically preserve the value ordering; however if we replace the Seq with a Set then we’d expect the ordering in the final list to have changed (and be sorted); if I implement the foldr/build rule would it interfere with this ordering by removing the intermediate Set and the fact that it will insert values in sorted order?
  • Benchmarking: is there a nice way of doing per-class benchmarking to be able to compare the performance of different data structures? For example, being able to compare how long it takes to insert 100 random values into a list (by consing) compared to inserting those same values into a Set.
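
For reference, here is roughly what the classic foldr/build fusion rule looks like for plain lists; this mirrors the rule GHC itself uses, but all of the names here are local to the sketch, and a container-classes version would need per-class analogues of both functions:

{-# LANGUAGE RankNTypes #-}

-- 'myBuild' abstracts a list producer over the cons/nil it uses; the RULE
-- below then cancels the intermediate list whenever such a producer meets
-- a foldr.  The phased INLINE gives the rule a chance to fire first.
myBuild :: (forall b. (a -> b -> b) -> b -> b) -> [a]
myBuild g = g (:) []
{-# INLINE [1] myBuild #-}

{-# RULES
"myFoldr/myBuild" forall k z (g :: forall b. (a -> b -> b) -> b -> b) .
                  foldr k z (myBuild g) = g k z
  #-}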

So, that seems to be the battle I’ve taken upon myself. I’d greatly appreciate any pointers people can give me either as comments here or by emailing me.

Oh, and a reminder: I’m going to stop collecting responses for my survey on what to call the “new FGL” library at about 12 PM UTC this Friday. I’ve already got about 60 votes; more are welcome.

14 July 2010

Data-Oriented Hierarchies

Filed under: Graphs,Haskell — Ivan Miljenovic @ 12:24 AM

In the Haskell community, there are several topics of discussion that keep coming up over and over again in terms of dealing with the hierarchies in our code. Some of these topics are:

  • Fixing the Functor/Applicative/Monad class hierarchy (however you want to structure it);
  • The best way to define and use monad transformers;
  • Making Functor more relevant; taken to the extreme by the “Caleskell” definitions used by lambdabot on IRC, where it seems almost everything can be expressed in terms of fmap.

Now, I think this kind of discussion is an indication of good health in the Haskell community where we are doing our best to determine what the optimal solution to these problems are (rather than just giving up or being dictated to by a single individual). However, something I’ve come to realise recently is that, in my understanding, these discussions are mainly oriented towards the best way of abstracting how we write code, rather than how we use the data structures that make up the code. Hence, the topic of this blog post.

My Goal

What I want to discuss here is the concept of how we can best define class hierarchies that let us easily interchange our data structures. The purpose of this is that currently, if I write some code using a list as my underlying data structure and then decide that a Sequence would be a better fit because I do a lot of appends, I have to re-write every single bit of my code that knows about that particular data structure. However, I would much prefer to just have to change a few top-level type signatures and maybe some list-specific items in my code and then the magic of type classes would take care of the rest.

Avoiding Duplication

The main situation where such a hierarchy would be useful is when writing libraries: duplication is avoided by not having to write a list-specific, a Sequence-specific and a Set-specific version of a function (e.g. to test if the data structure in question has at least two of the provided values). More than that: oftentimes we are constrained in how we use libraries by what data-type the library author preferred at the time of writing. A library function may require and then return a list, whereas we’re using Sets everywhere else. If there is no pressing reason for it to use a list rather than a Set, then why should it?

Is such a hierarchy already available?

There are some previous attempts at something like this, including (but not limited to):

  • Functor + Foldable + Traversable; this approach can’t deal with structures such as Sets as they require an extra restriction on the parametric type.
  • Edison can cope with Set, etc. and has a nice hierarchy between the individual sub-classes (if anything it has too many sub-classes), but is used by very few packages and has what I consider to be a few warts, such as explicitly re-exporting the data types in question in new modules, and some methods (such as strict) that really belong elsewhere.
  • collections seemed to have been another attempt at this, but never seemed to have built on any version of GHC since 6.8.
  • When you only want to consider containers with a linear structure, ListLike is available. However, it seems to be possibly over-busy.
  • Even more specialised than ListLike is IsString, the point of which is to be able to use string literals in Haskell code to define Bytestrings, etc.

The closest viable class/library to my ideal listed above would be a cross between Edison and ListLike; the former has an actual class hierarchy (to avoid duplication, etc.) whereas the latter seems to be used more in actual practice.

My point here about a class hierarchy is this: in most aspects, any sequence (or “ListLike” data structure) can be considered a really inefficient generic collection/set: you still want to have a function to test for membership, you want to be able to add values, to know how many there are, etc. As such, definitions should be as high up in the hierarchy as possible to let functions that use them be as generic as possible in terms of their type signatures.

The Joker in the deck

There is one conflicting issue in any such hierarchy: mapping.

Ideally, we wouldn’t want to require that instances of these types have kind * -> * (so that we can for instance [pun not intended] make ByteString an instance of these classes with a “value type” of Word8). However, as soon as we do that we can no longer specify a map function nicely.

ListLike gets around this by defining a map function that doesn’t constrain the data structure type. This means that it’s possible to write map succ with a type of ByteString -> [Word8]. Whilst this might be handy at times, it can also lead to type-matching problems if the surrounding code doesn’t force the two container types to be the same (e.g.: print $ map (*2) [1,2,3,4]); moreover, this definition of map is in essence a complete fold over the data structure, whereas more efficient versions may be possible if we can somehow specify at the type level that the result must be the same data structure.

However, the problem is that technically [Int] and [Char] are two completely separate data types; as such, any map between them will require going from one type to another (since we’re not assuming kind * -> * here). It is possible to get around this, but it’s pretty ugly:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, FlexibleContexts #-}

class Collection c a | c -> a where
  cons :: a -> c -> c

class (Collection (c a) a) => MappableCollection c a where
  cmap :: (MappableCollection c b) => (a -> b) -> c a -> c b

instance Collection [a] a where
  cons = (:)

instance MappableCollection [] a where
  cmap = map

In essence, the whole point of the MappableCollection class is to force the Collection instance back into having kind * -> *. It might be better just to have Collection use ListLike’s rigidMap and then leave “normal” mapping up to Functor or RFunctor (which works better with the whole “class hierarchy” concept). It’s just a shame that there’s no way of having mapping work regardless of the kind of the data type.
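
For comparison, a rigidMap-style method keeps everything at kind * by only allowing element-preserving maps; a small hypothetical sketch (not ListLike’s or container-classes’ actual code):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- Element-preserving mapping never needs a second container type, so the
-- container can stay at kind *.
class Collection c a | c -> a where
  cons     :: a -> c -> c
  rigidMap :: (a -> a) -> c -> c

instance Collection [a] a where
  cons     = (:)
  rigidMap = map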

So, what are you going to do about this?

I’m going to make a stab at yet-another-collection-class-hierarchy this weekend at AusHac. I’m not sure how far we’ll get, but I’ll see.

Graph Hierarchies

My interest in data structure hierarchies came out my frustration at the lack of a common reference point for graph data types. Developing a base hierarchy is going to be my main focus at AusHac (the collections classes are aimed at being used within this graph library). My current plans look something like this (note that this doesn’t include extra packages providing specific instances, such as “vector-graph” or something):

That is, the actual “graph” library will also cater for other graph-like data structures, such as Cabal’s PackageIndex type. FGL (both the old and the “new” version, whatever it’ll be called) will then extend these classes to provide the notion of inductive graphs; anything that isn’t directly related to the notion of inductive graphs will be shifted down to this notion of “generic graphs”.

In terms of terminology, to ease the transition I’m probably going to stick to current FGL-nomenclature for the most part (unless there’s something horribly wrong/bad about it). So we’re still going to talk about Nodes rather than Vertices, etc.

Old FGL

As I intimated in the extended announcement for fgl-5.4.2.3, apart from bug-fixes we’re not going to work on the current 5.4 branch. The 5.5 branch will be developed so as to use the generic graph classes once I’ve got them sorted out, and then that will probably be the end of it.

New FGL

Now, this has become a rather hot topic: should a rewrite of FGL still be called FGL? I’ve covered this earlier, but I have now created a survey to try and find out what the community thinks it should be called (I did want an “other” option in the first drop-down menu, but Google Docs wouldn’t let me :( ).
