What’s Your Tweet?

If you had to give a single, pithy piece of advice to a junior programmer today, what would it be?

Some possibilities…

  • Test First – TDD, SDD, …
  • Structure Everything
  • Avoid the Heap – use a stack-based discipline
  • Hop Around – gain breadth
  • Stay Put – gain depth
  • Read Voraciously
  • Specialize – Mobile, SOA, Big Data, Concurrency, …
  • Generalize – All of the above
  • Embrace and Extend
  • Don’t Be Evil
  • Learn Idiomatic Style
  • Lawyer Up – C++, C#, Java, Ruby, Haskell…
  • Be Polyglot – A Language Per Year
  • Be Multi-Paradigm – OO, FP, Prototype…
  • Grok TMP, Monads, …
  • Embrace Scrum, KanBan, XP, …
  • Eschew Scrum, KanBan, XP, …
  • Contribute to the Community
  • Share – Teach, Mentor, Speak, Publish

Since good programmers appear to be born and not made, maybe the best advice is to “Know Thyself”. Unless you have, or have a way to develop, an intuition for good programming practice, you may be engaged in a Sisyphean task.

Syncing Enums and Arrays

Here is a basic technique that may not be familiar to all…

Often, the need arises to define a symbolic enumeration, along with a “companion” array of complementary data such as string names.  With default compiler-assigned values, the enumeration serves as a convenient index into the companion array, as in the following example:

// Enumeration and companion array, often 
// accompanied by a comment like this:
// WARNING: Must keep ColorEnum and ColorNames in sync!
enum ColorEnum
{
	Red,
	Green,
	Blue
};

static const char * ColorNames[] = 
{ 
	"Red",
	"Green",
	"Blue"
};

void PrintColor(ColorEnum c)
{
	cout << ColorNames[c];
}

The risk of this technique lies in keeping symbols and slots coordinated - both the sequence and the count.  Beyond that, it violates the DRY principle ("don't repeat yourself"), creating more opportunities for things to get out of sync.

The solution is to use a pair of macros.  The first, which we'll call sequence, defines an abstract sequence of elements.  The second, which we'll call select, defines the attributes of each element.  Often, the function of the select macro is simply to select one or more entities from the abstract definition in sequence.  But the capability is general and can be used to define any sequence, including enums, static C-style arrays, static const members, parse tables, etc.

Following is a code sample demonstrating the basic idea.

// Define abstract sequence of elements, each of which 
// programmatically "selects" appropriate attributes
#define sequence \
	select( Red, 0xFF0000 ) \
	select( Green, 0x00FF00 ) \
	select( Blue, 0x0000FF )

// Generate symbolic enum: Red, Green, Blue
#undef select
#define select(symbol, value, ...) symbol,
enum ColorEnum { sequence };

// Generate color name sequence: "Red", "Green", "Blue"
// Indexed by ColorEnum above
#undef select
#define select(symbol, value, ...) #symbol, 
static const char * ColorNames[] = { sequence };

// Generate color value sequence: 0xFF0000, 0x00FF00, ...
// Indexed by ColorEnum above
#undef select
#define select(symbol, value, ...) value,
static int ColorValues[] = { sequence };

// Generate color struct sequence: {"Red", 0xFF0000}, {"Green", 0x00FF00}, ...
// Indexed by ColorEnum above
// Note the definition of select as a variadic macro, 
// to conveniently pick up all remaining element values.
#undef select
#define select(symbol, ...) { #symbol, __VA_ARGS__ }, 
static struct Color
{
	const char * symbol;
	int value;
}
Colors[] = { sequence };

// Define public static color symbols with numeric values
// (named ColorConstants to avoid shadowing the Colors
// array defined above)
#undef select
#define select(symbol, value, ...) static const int symbol = value;
struct ColorConstants { sequence };

For a deeper dive into code generation with the preprocessor, I recommend reading Appendix A, “An Introduction to Preprocessor Metaprogramming,” from the excellent “C++ Template Metaprogramming” by David Abrahams and Aleksey Gurtovoy.

C++ Extension Methods

Here’s another wish for C++14:  Extension Methods.

I’m a huge fan of these in C#.  In case you’re not familiar, I’ll give you a quick introduction.  An extension method is simply a free function that permits the interface of a class to be extended outside the class definition.  This is achieved through a magical bit of syntactic sugar.  In C#, extension methods are declared static, with the first parameter representing the object to be extended by the method.  Intuitively, the keyword ‘this‘ is used to mark this parameter.  Here is an example:

public static class StringExtensions
{
    public static bool IsNullOrEmpty(this string s)
    {
        return string.IsNullOrEmpty(s);
    }

    public static int ParseInt(this string s)
    {
        return s == null ? 0 : int.Parse(s);
    }
};

void UsingExtensionMethods()
{
    string s = "42";

    // without extension methods
    var isNull = string.IsNullOrEmpty(s);
    var intVal = int.Parse(s);

    // with extension methods
    isNull = s.IsNullOrEmpty();
    intVal = s.ParseInt();

    // the syntactic equivalent, which callers may also write explicitly
    isNull = StringExtensions.IsNullOrEmpty(s);
    intVal = StringExtensions.ParseInt(s);
}

Okay, so there’s a marginal improvement in compactness and readability, so what’s the big deal? I’ll tell you. Extension methods are all about tooling support. Here’s what the editing experience looked like as I typed the code above into my IDE:

[Screenshot: the IDE’s autocomplete list suggesting IsNullOrEmpty as I typed]

Here, all I could remember was that I needed a method on the string class that had the word “null” in it somewhere.  And that’s all I needed to remember:  any modern IDE offers autocomplete, which is a great productivity boost.  It saves typing and it saves remembering.  The really clever thing about extension methods in .NET is that they afford an expando-like capability to strongly-typed compiled languages – something usually associated solely with dynamic languages, like Javascript or Ruby.

Koenig to the Rescue?

Some have argued that C++ does not need this feature, and that Argument Dependent Lookup is sufficient, but this misses the point of extension methods.  When we program in OO languages, the starting point for every new line of code is almost always an object.  We type in an “argument” (a value/reference/pointer to an instance of a type), usually with some help from autocomplete.  Then, we type in a member (method/property/event), again with help from autocomplete.  On the other hand, if I begin by typing the function, everything’s backwards.  Rather than scoping to the argument in question, I am considering everything that is in scope at the moment.  And there is no support the IDE can give me.  According to Anders Hejlsberg, that is precisely the reason LINQ queries buck the SQL approach and begin with the “from” keyword, rather than the “select” keyword – to support Visual Studio’s autocomplete feature (Intellisense).

Subclassing, Anyone?

Others have suggested the tried and true OO solution: subclassing.  However, this technique does not scale at all.  First, it cannot work with built-in primitive types like int.  Second, it doesn’t work for many UDTs (e.g., those with private constructors, non-virtual destructors, etc).  Third, it doesn’t support “optional” (nullable) arguments.  Fourth, it interacts poorly with code maintenance.  Given an existing body of code using a given type, say std::string, if I want to inject a new method foo() on that type, I have to subclass it, creating stringex.  Now, I have to chase all references to std::string through the codebase, replacing them with stringex.  At least, I’d have to do this as deep and far as I’d ever expect to need to call foo (and who can anticipate that?).  All remaining instances would simply be sliced back down to std::string.  Now imagine iterating that process for future maintenance cycles.  Hmmm.  Much has been written on the topic of abusing subclassing, by Herb Sutter, Scott Meyers and others.  In the end, it’s not a viable implementation strategy for extension methods.
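
To make the hazard concrete, here is a minimal sketch of where extension-by-subclassing leads (stringex and is_palindrome are hypothetical names):

#include <algorithm>
#include <string>

// A hypothetical attempt to "extend" std::string by subclassing
struct stringex : std::string
{
	using std::string::string;	// inherit constructors (C++11)

	bool is_palindrome() const
	{
		return std::equal(begin(), end(), rbegin());
	}
};

void Demo()
{
	stringex s("racecar");
	s.is_palindrome();		// fine here...

	std::string plain = s;		// ...but any copy slices back down
	// plain.is_palindrome();	// error: the extension is gone

	// Worse: deleting a stringex through a std::string* is
	// undefined behavior - std::string's destructor is non-virtual
}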

Operator, Give Me Long Distance

Interestingly, C++ already comes very close to this capability when defining free binary operator functions, such as ostream & operator<<(ostream & os, const string & s).  In this case, the compiler automatically permits a more natural infix syntax for such operators.  When combined with returning the first argument back out of the operator, chained fluent-style programming can be achieved, like so:  cout << str1 << str2 << str3.  The standard library’s IO manipulators go one step farther, defining << operators that take a free function, such as endl, permitting call chains like cout << str1 << endl.

Some (like Boost) have put all of this behavior to use by designating an arbitrary, infrequently-used operator to do the work of invoking arbitrary extension method functors.  In a nutshell, here’s how it looks:

template<typename extension>
string operator|(string & s, extension method)
{
	// return the transformed result, so s | Left(5) yields "hello"
	return method(s);
}

function<string(string &)>
Left(int count)
{
	return [=](string & s)
	{
		return s.substr(0, count);
	};
}

void TestExtensionMethod()
{
	string s = "hello world";
	string s2 = s | Left(5);	// "hello"
	cout << s << s2;

	// oops - operator precedence fails - parens needed
	cout << (s | Left(5));
}

It’s testimony to the power of C++, but it quickly breaks down in real-world scenarios.  For example, operator precedence is not natural.  Also, we’re really back where we started.  The IDE doesn’t know we’re invoking a function on the right side of the operator, so it’s silent as the grave.  And without IDE support, there’s no gain in productivity.

The Request

In the end, there doesn’t appear to be a way to achieve extension methods with the language as it stands.  Therefore, I would propose that C++ be extended to permit an alternate calling syntax for any free function, effectively eliding the first argument and passing it implicitly as the ‘this’ pointer.  The behavior should work closely with ADL rules to determine when and where scoping is necessary.  Of course, this feature would introduce ambiguities which the programmer would need to resolve.  An example that springs to mind immediately would be iterator begin and end methods.  Binding preference should be given to “regular” instance methods, over extension methods.  Unlike C#, I would want to permit calling an extension method from inside another method, again assuming all ambiguities are resolved.  This feature should also fully embrace template support.  Imagine generating families of helper methods on an arbitrary type (either primitive or UDT – it makes no difference).  Now we have something approximating a “mixin”.  Imagine combining that behavior in turn with template-based systems for parsing, serialization, etc.  This would be a powerful feature indeed.

P.S.  After writing this, I stumbled over the D language’s Uniform Function Call Syntax feature.  I think it fits the bill rather nicely!
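
For illustration, here is roughly how the two spellings compare today and under such a proposal. The free function left is invented for this sketch, and the member-style call is hypothetical syntax, shown only in a comment:

#include <iostream>
#include <string>

// A free function we would like to invoke with member syntax
std::string left(const std::string & s, std::string::size_type count)
{
	return s.substr(0, count);
}

int main()
{
	std::string s = "hello world";

	// Today: free-function syntax - autocomplete cannot scope
	// its suggestions to s when we begin by typing "left"
	std::cout << left(s, 5) << std::endl;

	// Under the proposed extension (or D's UFCS), the compiler
	// would accept the member-style spelling and rewrite it to
	// the call above:
	//
	//	std::cout << s.left(5) << std::endl;	// hypothetical
}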

Template Template Parameter Overloading

As the ISO C++ committee deliberates on what to include in the next iteration of the language, C++14, I’d submit the following for their consideration…

Often, the need arises to decorate one type with another (typically through derivation or some variation on CRTP). Template Template Parameters are a powerful tool for implementing this, as Alexandrescu demonstrates with his scattered and linear hierarchy generation (Modern C++ Design). The problem is that currently, C++ demands a uniform interface for such parameters – they must have identical parameter lists, despite the fact that “regular” templates and run-time functions offer support for both overloading on parameter lists and for defaulting parameter values.  

There does not appear to be a sound reason for this limitation, as all ambiguity would necessarily be resolved at compile time. This is particularly onerous when using third-party templates which make heavy use of parameter defaulting themselves (e.g., collections) as template parameters.  The defaulting nature is not transparently supported by the referencing template.  One possible implementation might be to use variadic templates.  Another would be to use template typedefs (aliases) to “hide” the optional parameters.  These feel more like workarounds.  My preference would be for the language to simply support this capability natively.  C++ may never achieve the homoiconicity of functional languages, with the unification of compile-time and run-time features, but it can take steps to remove arbitrary distinctions.
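
A minimal sketch of the limitation as it stands today (SmallVector is a hypothetical user template):

#include <list>
#include <memory>
#include <vector>

// The template template parameter must declare an exact arity
template<typename T, template<typename, typename> class Container>
struct Stack
{
	Container<T, std::allocator<T>> items;
};

Stack<int, std::vector> s1;	// OK: vector<T, Allocator> matches
Stack<int, std::list> s2;	// OK: list<T, Allocator> matches

// A template whose parameter list differs cannot be passed, even
// though its extra parameter is defaulted:
//
//	template<typename T, typename A = std::allocator<T>, int N = 16>
//	struct SmallVector { /* ... */ };
//
//	Stack<int, SmallVector> s3;	// error: arity mismatch
//
// The alias-template workaround mentioned above "hides" N:
//
//	template<typename T, typename A>
//	using SmallVec = SmallVector<T, A>;
//	Stack<int, SmallVec> s4;	// OK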

Code Progressions

I love jazz.  Especially when a musician improvises over the changes – the “changes” being chord progressions in a given section of a song.  One of my favorites is Coltrane’s Giant Steps.  I’m in awe when a soloist can float above complex chord changes without getting lost.  The key, I’m sure, is having the progression memorized.

Here’s an analogy: I’ve found it helpful to memorize several progressions in my programming. This helps ensure that my code is safe, efficient, and loosely coupled, without conscious effort – like a musician soloing over the changes. The following code progressions might be helpful for a programmer new to the art, and seeking to develop an idiomatic approach.  Each sequence begins with the safest, most restrictive option, progressing to greater complexity and capability as necessary.

Member Accessibility

  • Private
  • Protected
  • Public

This is the progression every object-oriented programmer begins with. Keeping the surface area of a class as small as possible reduces complexity, the number of test cases required, and the cognitive load for consumers of the class (i.e., it does your peers a favor).

Member Declaration

  • Module Static
  • Class Static Inline
  • Class Static Out-of-line
  • Const Non-Virtual Inline
  • Const Non-Virtual Out-of-line
  • Modifiable Non-Virtual Inline
  • Modifiable Non-Virtual Out-of-line
  • Const Virtual
  • Modifiable Virtual

Each degree in this progression increases cost or complexity, or reduces generality. This one is a little meatier, because it combines several concerns, all related to member declaration.  But we can break it down into “atomic” sub-progressions, each of which factors into the overall progression:

  • Module → Class
  • Static → Member
  • Const → Modifiable
  • Non-Virtual → Virtual
  • Inline → Out-of-line

Private Static might be useful with pure header implementations, or when it’s hard to identify a source code module for a member to live in. But generally, Module Static is preferable to Class Static because it introduces zero coupling, never appearing in the class definition at all. One pattern for implementing Module Static is to forward declare an opaque helper class and wrap it with a smart pointer, as sketched below.
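
One variation of that pattern, sketched with hypothetical Widget and Helper names; the helper never appears in the header, so clients of Widget acquire zero coupling to it:

// widget.h - the public interface mentions no helper at all
class Widget
{
public:
	void Process();
};

// widget.cpp - the helper lives entirely within this module
#include <memory>

namespace
{
	class Helper
	{
	public:
		int Crunch(int x) const { return x * 2; }
	};

	// Module-static instance behind a smart pointer, built on demand
	Helper & GetHelper()
	{
		static std::unique_ptr<Helper> helper(new Helper);
		return *helper;
	}
}

void Widget::Process()
{
	GetHelper().Crunch(42);
}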

Const methods have a broader domain than Modifiable methods, given the compiler’s ability to bind a const reference to a temporary object, and so should be preferred.

Inline methods can be handled more efficiently by the compiler, and should be preferred over out-of-line methods, all things being equal. For example, simple get/set accessors which add no coupling should always be implemented inline. In this case, the narrower context also aids readability.
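
For instance, a minimal sketch of accessors that belong inline:

class Point
{
public:
	// Trivial get/set accessors: inline, and const where possible
	int X() const { return x; }
	void SetX(int value) { x = value; }

private:
	int x = 0;	// C++11 in-class member initializer
};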

Non-static member methods require the additional pushing/en-registering of the ‘this’ pointer when dispatching the call, and should only be used when per-instance state is required.

Finally, Virtual methods require a level of indirection (pointer table lookup) when dispatching the call, and should be used only as necessary.

Data Storage

  • Stack Variable
  • Function Parameter
  • Class Member (Aggregation)
  • Heap

The scope of a variable’s existence and visibility should be kept as narrow as possible. This doesn’t just aid readability. For every degree of scope broadening, there is increased cost in managing memory or communicating the data between parts of the program. Whenever possible, a developer should strive for a “stack-based discipline.”  This might entail rewriting an algorithm to be recursive, or simply relocating a variable to the topmost stack frame in which it must be visible. The heap should be considered a means of last resort. It certainly has its place, such as a factory method returning a collection, but it’s relatively slow and adds complexity and risk.
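
A quick sketch of relocating a variable to the topmost frame that needs it (the names are hypothetical):

#include <vector>

// The collection is owned by the topmost frame that must see it,
// and flows down the call stack by reference - no heap required
void Produce(std::vector<int> & results)
{
	results.push_back(42);	// fill the caller's storage
}

void TopFrame()
{
	std::vector<int> results;	// stack-owned, automatically reclaimed
	Produce(results);
	// ... consume results here, at the top of the flow ...
}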

Code Reuse

  • Aggregation (or Private Inheritance)
  • Single Inheritance
  • Single Concrete & Multiple Interface
  • Multiple Concrete

Private Inheritance might be useful if a developer wants to expose a small subset of an inherited interface publicly, in which case a few “using” declarations can come in handy. Otherwise, Aggregation is typically preferred, as it reduces complexity in constructor implementations.
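
A sketch of that technique – private inheritance republishing a narrow slice of the base interface via “using” declarations:

#include <vector>

class IntStack : private std::vector<int>
{
public:
	// Republish just the operations a stack should expose
	using std::vector<int>::push_back;
	using std::vector<int>::back;
	using std::vector<int>::pop_back;
	using std::vector<int>::empty;
	// The rest of vector's interface remains hidden
};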

Some languages enforce single concrete inheritance, permitting implementation of multiple abstract interfaces. This design eliminates the possibility of the “dreaded diamond hierarchy”, with multiple instances of the same base. Other languages permit multiple concrete inheritance, and must address multiple bases (typically through virtual inheritance).
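
Here is the diamond in miniature; with virtual bases, Bottom holds a single shared Top subobject rather than two copies:

struct Top { int id; };
struct Left : virtual Top {};
struct Right : virtual Top {};
struct Bottom : Left, Right {};	// one shared Top subobject

void Demo()
{
	Bottom b;
	b.id = 42;	// unambiguous, thanks to virtual inheritance
}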

Do you have a progression you’d like to add?  Let’s hear it!

Structure Aversion

A well-designed system is like a fractal image – it’s highly compositional. One can see finer degrees of structure at any level of detail.

My Idiosyncratic C++ series would not be complete without a discussion of structure. 

Good object-oriented (OO) programmers understand the basic principles of OO analysis.  And many make a respectable effort in decomposing a system into independent, reusable, composable building blocks.  Unfortunately, many such efforts end there, resulting in coarse-grained, poorly-structured designs.  In this post, I’ll explore what a well-structured design does, and does not, look like.

C++ has been described as two languages in one: a low-level language of constructors, destructors, conversion and assignment operators, and the like; and a high-level domain-specific language undergirded by the first. But highly structured code contains N “languages”, and is like a fractal image.  It could also be compared to an objective hierarchy, wherein the levels below answer the question “how?”, and the levels above, the question “what?” or “why?”  A good programmer is adept at shifting levels as necessary, and works hard not to conflate them.

Information Content

Well-factored code is synonymous with highly structured code.  You cannot have one without the other.  An information theorist might say that highly structured code contains more information.  This is true in two senses.  

First, structured code has greater “information density”.  Here, we’re not creating information so much as we are distilling and concentrating it.  The same signal remains, but there is far less noise.  And since every line of code is a liability, the fewer it takes to express an idea, the better.  (Fewer to write, compile, debug, review, test, document, ship, maintain, …)

Second, creating more structure implies defining more abstractions, which is a powerful programming exercise.  Articulating the utility and purpose of every artifact, and formalizing the relationships between them, makes the programmer’s intent as explicit as possible.  Taking the knowledge latent inside the gray matter, and “injecting” it into the system creates new information.  And as an information theorist might also say, there is “no information without physical representation.”  (OK, it still exists somewhere in the programmer’s head, but it sure isn’t very accessible.)

What’s In a Name?

As new degrees of structure are created, it’s probably fairly apparent that new names are needed (lambdas notwithstanding).  A good test for one’s own understanding of a system is the ability to name each part of it.  I have found that when I labor over the right name for an artifact – be it a namespace, class, method or whatever – it may be a symptom that my design is a bit muddled.  Most likely, it needs more fine-grained structure, permitting each piece to abide by the single responsibility principle.  Taming the naming beast is a good thing.  If I can create the best name for an artifact right now, I’m doing my peers and my future self a favor.  Conversely, if I can’t understand the system today (well enough to name it), how can I hope to do so when I have to maintain or extend it a year from now?

Structure Aversion

That brings us to our idiosyncratic practice – what I’d simply call “structure aversion.”  It’s manifested in many ways, which I’ll describe, but first I want to get down to the (de)motivation for it.  I think it’s most common among beginners, perhaps driven by a distaste for the “overhead” needed to define a new structure.  Some developers seem to think that every class warrants dedicated header and source files, complete field encapsulation, serializability, a host of virtual methods implementing various interfaces, etc, etc.  (The corollary of this is that having many small, local structures is effectively prohibited.)  This may be applicable for some singly-rooted hierarchy languages, especially managed ones.  But C++ inherits C, which embraces lightweight, native structs.  And these are ideal for POD types, tuples, parameter objects, compound returns, local aggregate arrays, and so on.

Let’s look at some examples of structure aversion, similar to what I recently encountered during a refactoring exercise.  The first is a simple case of structuring a function’s parameters and returns, which can lead to a more compact, fluent, functional style.

// Factory: the return value is assigned to an arbitrary bit of
// person data (SSN) - the "lucky winner!"  Also, it's difficult 
// to tell at a glance which params are "in" and which are "out".
int GetPerson(const string & first, const string & last, int & dob);

// Note the asymmetry of GetPerson and OutputPerson, due to the SSN
void OutputPerson(const string & first, const string & last, int dob, int ssn);

void ProcessPerson(const string & first, const string & last)
{
	// Lots of temporaries are needed with structure aversion.
	// Note how they're "viral" - the dob out param temporary
	// necessitates the ssn temporary, which otherwise could 
	// have been passed along inline to OutputPerson.
	int dob;
	int ssn = GetPerson(first, last, dob);
	OutputPerson(first, last, dob, ssn);
}

// ... after refactoring ...

// A few POD structs, local or private to a class or module.
// Note the separation of key fields into a base struct, which
// is subclassed for the non-key fields, providing a tidy way
// to partition a struct based on "in" versus "in/out" usage.
struct PersonName
{
	string First;
	string Last;
};
struct Person : PersonName
{
	int SSN;
	int DOB;
};

Person GetPerson(const PersonName & name);

void OutputPerson(const Person & person);

void ProcessPerson(const string & first, const string & last)
{
	// The functional style may not be to everyone's taste,
	// but it shows the potential for compactness that 
	// is otherwise impossible with structure aversion.
	OutputPerson(GetPerson(PersonName{ first, last }));
}

Structure aversion is “compositional” in a way.  If the pre-factored code above looks bad, it looks worse when we have to deal with collections.  In this case, structure aversion leads to the use of parallel collections of primitives, versus a single collection of structures. This follows naturally from the example above. After all, without a Person abstraction, how would one transmit the details of several people? One might call this “column-centric” versus “row-centric” data access.  And while it may make sense for big data platforms like Google BigQuery, you have to learn to walk before you can fly (or, learn the rules before you break them).  Here’s what I found in my refactoring exercise:

// Without a Person abstraction, people must be collected partwise.
int GetPeople(vector<string> & firstNames, vector<string> & lastNames, vector<int> & dobs, vector<int> & ssns);

// Parallel collections are passed in as out params and 
// processed in parallel.  Lots of opportunity for bugs.
void ProcessPeople()
{
	vector<string> firstNames;
	vector<string> lastNames;
	vector<int> dobs;
	vector<int> ssns;
	auto numPeople = GetPeople(firstNames, lastNames, dobs, ssns);
	for( int i = 0; i < numPeople; i++ )
	{
		OutputPerson(firstNames[i], lastNames[i], dobs[i], ssns[i]);
	}
}

// ... after refactoring ...

// A much simpler factory
vector<Person> GetPeople();

// And its use
void ProcessPeople()
{
	auto people = GetPeople();
	for_each(begin(people), end(people), [&](Person & person){OutputPerson(person);});
}

In the examples above, I’ve simplified things greatly.  The original code had many more fields (and consequently, collections), and was using MFC collections, which made the parallel iteration even messier.

Factoring Data

In addition to using structures to aggregate runtime data, as above, they can also be used for compile-time constant data.  With this technique, aggregate initialization is used to create a static array of POD structs.  This approach can be used in a number of common scenarios, such as:

  • Render passes
  • Token parsers
  • UI control layout

Let’s take the last one as an example – UI control layout. This is a bit contrived, as these days I’d use a layout engine for new UIs. On the other hand, I find myself refactoring legacy code like this quite often, so I mention it.

enum Color
{
	Red, Green, Blue, Gray
};

struct Button
{
	Button(const string & caption, int color, bool isDefault = false);
};

struct Dialog
{
	void AddControl( Button * button, int left, int top, int width, int height );

	// This kind of structureless layout code is common, 
	// especially when machine-generated.  
	void Layout()
	{
		auto ok = new Button("OK", Red, true);
		AddControl( ok, 10, 400, 80, 20 );
		auto cancel = new Button("Cancel", Green);
		AddControl( cancel, 100, 400, 80, 20 );
		auto apply = new Button("Apply", Blue);
		AddControl( apply, 190, 400, 80, 20 );
		auto help = new Button("Help", Gray);
		AddControl( help, 280, 400, 80, 20 );
	}
};

// ... after refactoring ...

void Dialog::Layout()
{
	// A local POD struct to collect all constant button data
	struct ButtonData
	{
		const char * name;
		Color color;
		int left;
		// we can place "optional" (zero-defaulted) fields at 
		// the end of the struct, relying on the behavior of 
		// aggregate initialization for typical cases.
		bool isDefault;	
	} 
	buttons[] = 
	{
		{"OK", Red, 10, true},
		{"Cancel", Green, 100},
		{"Apply", Blue, 190},
		{"Help", Gray, 280},
	};
	// Now, we loop over the data defining each button
	// The code is slowly morphing from imperative to declarative.
	// Perhaps more importantly, we've "created new information",
	// adding structure to establish the cohesiveness of each
	// pair of statements in the original version.
	for (auto & buttonData : buttons)
	{
		auto b = new Button(buttonData.name, buttonData.color, buttonData.isDefault);
		AddControl( b, buttonData.left, 400, 80, 20 );
	}
}

// The refactoring above is an improvement, but all the references to
// buttonData are a bit of a code smell (Inappropriate Intimacy).
// ... so, after refactoring again ...

void Dialog::Layout()
{
	struct ButtonData
	{
		const char * name;
		Color color;
		int left;
		bool isDefault;
		// The appeal to this technique - we can drop little
		// bits of logic into the POD to encapsulate and clean
		// up call sites even more.
		void Create(Dialog & dialog)
		{
			auto b = new Button(name, color, isDefault);
			dialog.AddControl(b, left, 400, 80, 20);
		}
	} 
	buttons[] = 
	{
		{"OK", Red, 10, true},
		{"Cancel", Green, 100},
		{"Apply", Blue, 190},
		{"Help", Gray, 280},
	};
	// Now, we've got a much more declarative approach
	for (auto & buttonData : buttons)
	{
		buttonData.Create(*this);
	}
}

Chunky Functions

It’s important to point out that fine-grained structure should exist not just with types, but with functions too.  Simple one-line inline methods may seem gratuitous at first, but they can enhance readability and maintainability of code tremendously.  I recently refactored some XML-generation code that had a very low signal to noise ratio (probably less than 1), due to all the Xerces clutter.  I spent an hour writing a handful of methods for creating elements and attributes, in a literate style (permitting method-chaining).  The number of lines of client code was cut in half and the readability doubled, at least.  This kind of structuring is also helpful for conversions, casts, factories, and many other small sequences of oft-repeated statements and expressions.  It’s also helpful for “object-oriented currying” (as I call methods implemented in terms of overloads).
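
To give a flavor of that refactoring, here is a minimal sketch of a fluent element builder (the names are hypothetical; the real code wrapped Xerces):

#include <string>

class Element
{
public:
	explicit Element(const std::string & name) : name(name) {}

	// Returning *this is what permits method-chaining
	Element & Attribute(const std::string & key, const std::string & value)
	{
		attributes += " " + key + "=\"" + value + "\"";
		return *this;
	}

	std::string ToXml() const
	{
		return "<" + name + attributes + "/>";
	}

private:
	std::string name;
	std::string attributes;
};

// Usage: a literate, chained call site with a high signal-to-noise ratio
std::string BuildPerson()
{
	return Element("person")
		.Attribute("first", "Ada")
		.Attribute("last", "Lovelace")
		.ToXml();
}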

References

In closing, I’ll point out that this is all well-trod ground, but it bears repeating.  I’ve never regretted jumping into a ruthless refactoring leading to greater structure.  There may be moments when I wonder how long it will take to get the engine block back into the car, but the end always justifies it. Adding new information makes the code more self-describing and operating at a higher level of abstraction makes it more declarative.  Both of these contribute to making the code clearer.

For further research, I recommend Martin Fowler’s Refactoring: Improving the Design of Existing Code (a bit dated, but still relevant), as well as Andrew Binstock’s Dr. Dobb’s article on fine-grained structures.  The challenge of constraining oneself to 50-line classes is a great way to enforce a discipline that is sure to grow on you. In the meantime, I encourage you to create structure at every opportunity.

Generic Lambdas

When C++11 came to a compiler near me, one of the first features I was eager to try was lambdas.  As a fan of highly-structured code, I had often factored out “local functions” in my designs.  Because nested functions are prohibited in C++ (why?), I’d often rely on the technique of declaring a nested struct with a public static (stateless) method.  This achieves the same effect as a local function with a minimum of fuss.  Some libraries (e.g., Boost) provide macros to fabricate local functions, hiding the “ugliness.”  But it’s not that bad, and the struct name can be useful in providing some self-documenting organization.  Also, I tend to prefer keeping non-language subterfuge to a minimum.  It creates a clutter all its own.  But I digress – I was talking about lambdas.
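
For anyone who hasn’t seen it, here is a sketch of that nested-struct workaround:

#include <iostream>

void Process()
{
	// A "local function", C++03-style: a nested struct
	// with a public static, stateless method
	struct Local
	{
		static int Twice(int x) { return x * 2; }
	};

	std::cout << Local::Twice(21);	// behaves like a nested function
}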

I’ve used lambdas in C# since their inception as part of LINQ (a technological tour de force, by the way).  They permit a “locality of reference” for the programmer, permitting anonymous function object arguments to be implemented at the point of their use.  This reduces both a great deal of boilerplate code, and cognitive load on the programmer.

A Degrading Experience

So, when lambdas debuted in Visual C++, I eagerly put them to the first use I could find, and was… disappointed.  I had in mind to replace some of my nested struct method uses with lambdas, specifically for calling Windows APIs that accept a C callback function.  My scenario was thus aimed squarely at the “captureless lambda degrades to a function pointer” feature of lambdas. I can smile about it now, since this behavior is now available in Visual Studio 2012.  I mean, how cool is this?

EnumWindows([](HWND, LPARAM){return TRUE;}, 0L);

I’ve since used lambdas in countless contexts and still regard this as my favorite C++11 feature (second only to “auto“).  However, it wasn’t long until I discovered several more basic limitations of lambdas.  Another was lack of support for contravariance.  Now, I understand that C/C++ has never had this, so maybe just seeing it again in the new lambda light has made me wistful.  But I’d really like to be able to do this:

struct ParamBase{};

struct ParamSub : ParamBase{};

void Bar( void(ParamSub & p) );

void CallBar()
{
	Bar([](ParamBase & p)
	{
		// I only care about ParamBase in my lambda,
		// so why can't I use contravariance?
	});
}

Deduction for the Win

Most of the limitations I’ve encountered, however, could be summarized as a lack of generic (polymorphic) support.  In short, there are many situations in which type deduction would make lambdas more compact and readable.  I’m excited that this is getting some attention.  After co-authoring a proposal, Faisal Vali has created a test implementation and made it available here.  I ran across this on the official C++ website, which is calling for comment on Faisal’s work.  As should be expected, the approach is thorough and thoughtful, and I’ll have a hard time waiting for the standards committee to do their thing.  Still, assuming the proposal is adopted essentially without change, C++ haters are gonna have a field day with this syntax:

[]<>(){}

And it won’t be the Lispers.  They’ll love all those delimiters, bringing C++ asymptotically closer to FP enlightenment.  But pray, what will we do when we need the next pair?

Naked Aggression with the Heap

I’ll begin my Idiosyncratic C++ series with a discussion of the heap, its abuses, and its alternatives.

The Heap

The heap does not seem to be well understood among many C++ programmers.   I’m referring to the trade-offs involved with using it, not its underlying implementation.  Heap allocation should be a means of last resort, a necessary evil for getting the job done.   As a general pool of memory available for program use, it is slower and riskier to use than the stack.  And yet, many beginning C++ programmers are enamored with it.  Perhaps it’s a misappropriation of the ‘new’ keyword from managed languages like Java and C# – a conflation of the concepts of object creation and heap allocation.  In any case, it’s often the first tool the programmer reaches for in allocating memory.  And when it is deemed necessary, it is often used both “nakedly” and “aggressively”, leading to unnecessary risks for program safety.

The Stack

By contrast, the stack is the original, automatic, hardware-accelerated memory manager.  It’s impossible to leak stack memory, and difficult to double-delete it (or single-delete it, for that matter).  [The original purpose of the “auto” keyword in C/C++ was to designate an “automatic storage” variable with local block scope, in contrast to the “static” keyword.  But since the language provided “auto” as a default, the keyword found little use and has been re-purposed for type deduction.]  At its foundation, the computer is a stack-based machine.  Functional programmers know this and design their algorithms around it.  But it’s just as true for the object-oriented programmer.  Cultivating a “stack-based discipline” can vastly simplify designs, eliminate whole classes of bugs (heap-based), and improve efficiency and readability.

Aggressive

Many heap allocations are simply not necessary.  Examples include:

  • Locals
    Local variables newed and deleted entirely within a given block scope.   This must certainly be a holdover from managed languages.
  • Members
    Aggregate classes newing members in a constructor and delete-ing them in a destructor.  Sometimes this is necessary if a member does not have a public constructor (e.g., is obtained from a factory), is implementing the pimpl pattern, and so on.  However, I have seen many cases where member data is created on the heap with evidently no conscious thought.
  • Collections
    The (mistaken) belief that every collection must be implemented with a heap-based library template.  The corollary to this is that aggregate arrays are evil.  This is probably a misapplication of the good advice not to allocate raw arrays from the heap, preferring standard collections.  But this assumes that dynamic allocation is even needed in the first place.

Naked

Many heap allocations are made directly from client code.  Calls to new/malloc and delete/free are manually paired, and may be arbitrarily distant in code.  The risk of mismanaging these allocations is high.  Leaks can result from exceptions, or multiple function returns, or simply overlooking a terminal branch.  Double-deletes may result from pointer aliasing, or not having clear responsibility for ownership.

Avoiding the Heap

Now that we’ve identified a few risks of heap use, let’s discuss ways to avoid it.

  • Locals
    For local variables, the cure is simple – allocate the object directly on the stack, rather than through a pointer (see the sketch following this list).
  • Members
    The same approach should be used for member data.  This does not guarantee the member will live on the stack.  Instead, it ties the allocation policy of the member to its containing class, so that it may live on the stack or on the heap.
  • Collections
    When dealing with simple, static collections, use an aggregate-initialized array.  There are some restrictions to this technique, when dealing with user-defined types.  However, these can usually be worked around, and should improve as the rules for POD types are revised and loosened.  There is virtually no advantage to dynamically allocating a collection of static (compile-time constant) objects, when a fixed array will do.
  • Algorithms
    Some heap allocations may indicate a flawed algorithm.  An example would be the creation or collection of data at a deeper level in the stack than where it is consumed.  In this case, the data must clearly be placed on the heap so it can “swim upstream”, against the call stack current.  The solution is to invert the pattern and use a more “functional” approach, passing data down the call stack to where it is needed.  This can be generalized to multiple data flows.
  • Recursion
    Similar to the preceding is to use a recursive approach, over a loop-based approach.  This can be especially helpful in avoiding the creation of short-lived heap-based collections, where an element may depend on the computation of prior elements (e.g., Fibonacci).  A map-reduce pattern can then move the reducing logic to the terminal condition of the recursion.
  • alloca
    Finally, consider that “dynamic allocation” != “heap allocation”.  The CRT function alloca enables fast, safe, dynamic allocation on the stack.  This can be very effective in performance critical situations where many small objects are frequently allocated and freed, such as with 3D mesh processing algorithms.  [Note: there are some challenges to the effective use of alloca and its MSVC cousins, _malloca and _freea.  The scoping requirements are tricky, foiling attempts to wrap calls in an RAII pattern.  But the utility of the function outweighs these concerns.]
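
Here is a minimal sketch of the Locals and Collections items above (Widget is a hypothetical type):

#include <vector>

struct Widget { int value = 0; };

void Locals()
{
	// Idiosyncratic: naked, aggressive heap use - every early
	// return or exception between new and delete risks a leak
	Widget * w1 = new Widget;
	delete w1;

	// Idiomatic: the object lives on the stack and is
	// destroyed automatically at scope exit
	Widget w2;
}

void Collections()
{
	// A fixed table of compile-time constants needs no heap at all
	static const int Primes[] = { 2, 3, 5, 7, 11, 13 };

	// Truly dynamic contents still belong in a standard container,
	// which manages its own heap storage safely
	std::vector<Widget> widgets(3);
}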

Holding the Heap at Arm’s Length

In the event the heap cannot be avoided, it should be “treated with modesty”.  Direct calls to new/delete should be avoided if possible, or their exposure minimized.  Smart pointers should be used systematically and universally, to ensure exception-safe unwind behavior.  Locals and members which require heap allocation (factory-based, pimpl) should be wrapped in smart pointers, as sketched after the list below.

  • For shared (aliased) reference-counted smart pointers, std::shared_ptr  should be used.  The std::make_shared template function should be used to allocate std::shared_ptr objects safely and efficiently.
  • For simple, unshared “singleton” smart pointers, either as class members or local variables, std::unique_ptr should be used to manage heap calls in an exception-safe manner.  As with std::make_shared, std::make_unique may be offered in the future for creating std::unique_ptr objects safely.
  • [Note that with C++11, legacy smart pointer templates, such as auto_ptr, are now considered obsolete.]
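
A minimal sketch of both recommendations (Widget is again hypothetical):

#include <memory>

struct Widget
{
	explicit Widget(int v) : value(v) {}
	int value;
};

void Sketch()
{
	// Shared ownership: make_shared performs a single allocation
	// for the object and its reference-count control block
	auto shared = std::make_shared<Widget>(42);

	// Sole ownership: the Widget is released on every exit path,
	// exceptional or otherwise (std::make_unique, its counterpart,
	// is still a proposal as of this writing)
	std::unique_ptr<Widget> sole(new Widget(7));
}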

In summary, my advice is to use the heap as sparingly and indirectly as possible.  Your peers, testers, and customers will appreciate it.

Idiosyncratic C++

Happy St. Patrick’s Day!

Years ago I was in Florida courting a prospective customer.  In one of our initial biz-dev meetings, we were trying to get our arms around the opportunity, and were a bit paralyzed.  Finally, Jack, the customer’s colorful VP of Sales, said “what we need is a dead cat!”.  My partner and I looked at each other blankly.  Jack explained, “A dead cat is a brainstorming technique we use.  Someone puts a lousy idea on the table to get started.  Everyone takes a look at it and says, ‘We don’t know what we want, but we know it’s NOT THAT!’ ”

Sometimes, a good way to understand something is to identify what it is not.  With that technique in mind, I’m going to start a series of posts on idiomatic C++.  In each post, I’ll start with an anti-pattern or two of what Andrei Alexandrescu calls “C+”.  Rather than being idiomatic C++, we might call this style “idiosyncratic C++” (“idiotic C++” is probably too strong).  Many arrive at idiomatic C++ by way of idiosyncratic C++, particularly if they’ve come from other languages like C or Java.  They’ve learned the hard way how to write safe, efficient, maintainable, platform-independent C++ in a conventional style – in short, idiomatically.  Those trench experiences are gold <insert tenuous tie-in with St. Patrick’s Day pot-of-gold image>, and worth exploring and sharing.

My working definition of idiosyncratic C++ is:

Idiosyncratic C++:  Code that a compiler will accept, but a reviewer should not.

In this series, I’ll cover topics such as:

  • Using The Heap
  • Structured Code
  • Eager Evaluation
  • Naming Conventions

I’m open to requests and feedback.  Send them in and stay tuned…


A Place for Override, and Every Override in its Place

“I love C++!”, said an enthusiastic job candidate during an interview a year ago.  That candidate has since become one of the best developers I ever hired.  I too love C++, and especially so during its current renaissance.  Prior to C++11, the last time I felt this way was at the 1997 Visual C++ Developers Conference in Orlando.  Here, C++98 improvements like explicit and mutable were unveiled, causing many a developer to run back to their ARM to review “const correctness”, which had more or less been ignored to that point.  If C++98 was evolutionary, C++11 is revolutionary – a tour de force of new features.  Some, like lambdas, are bold forays into the archrival territory of functional programming.  The most profound new feature, from my perspective, is Rvalue References.  These enable move semantics (construction and assignment), which are potentially far more efficient than their copy counterparts.  For a language that lacks access to the compiler’s parse tree, features like Rvalue References permit surprisingly subtle expressions for optimal code generation.

So the C++ language is alive and well, and its priesthood is openly solicitous of new ideas from the community.  I’ll oblige with a series of posts on features I’d like to see added to the language.  Each of these has been born of necessity, when in the throes of a coding session I lamented that I had to once again resort to workarounds.

To kick things off, let’s consider override.   This new keyword (strictly, an “identifier”, but I’ll use “keyword” for simplicity and clarity) allows the programmer to make a distinction between an override of an existing virtual method, and the introduction of a new virtual method.  Previously, the keyword virtual was overloaded for this purpose and often led to errors when virtual dispatching would fail due to a refactoring of base or derived classes.  Let’s see how this could happen.

Initially the code looks like this:

struct Base
{
	virtual void foo()
	{
		cout << "Base::foo called" << endl;
	}
};

struct RiskyDerived : public Base
{
	// "virtual" is optional, making overrides difficult to track down
	void foo()
	{
		cout << "RiskyDerived::foo called" << endl;
	}
};

void Demo()
{
	RiskyDerived derived;
	Base & base = derived;
	base.foo();	// "RiskyDerived::foo called"
}

After a hasty refactor, we now have this:

struct Base
{
	virtual void foo(int = 0)
	{
		cout << "Base::foo called" << endl;
	}
};

void Demo()
{
	RiskyDerived derived;
	Base & base = derived;
	base.foo(); // "Base::foo called"
}

The binding of RiskyDerived::foo to the vtable entry of Base::foo is now broken, but the code still compiles, resulting in a runtime error (the worst kind).  But with the override keyword in place, we are protected from situations like this because the compiler will issue an error:

struct BetterDerived : public Base
{
	// Error: member function declared with 'override' does not override a base class member
	void foo() override;
};

The problem with override is that it’s only a half solution.  My development team has been bitten by the situation above on a number of occasions.  So naturally, when a new keyword like override is introduced, we eagerly embrace it and look for a way to systematically employ it throughout the codebase.  That’s where override breaks down.  In its current implementation, systematic use is not feasible, because the keyword is optional.

What we have today is this behavior:

All uses of the override keyword must in fact override a virtual method. 

What we’re lacking is a way to enforce the contrapositive:

If a virtual method lacks the override keyword, it must not override a virtual method. 

We might term the current behavior “weak” or “permissive”, and my ideal behavior “strong” or “strict”.  In “strict” mode, without the override keyword, a method declaration must either introduce:

  1. A new virtual method (if accompanied by the virtual keyword) or
  2. A new non-virtual method, possibly hiding or overloading another method of the same name.

In short, all virtual method overrides must use the override keyword in “strict” mode.

I understand that a “strict override” would be a breaking change to much existing code.  But that is the whole point:  I would like the compiler to tell me where I need to use override (or not).  Because the virtual keyword is optional when declaring an overridden virtual method, it is a tedious and error prone exercise to manually track down all appropriate locations for override in an existing codebase.   It’s true that some compilers will warn of method hiding, if the signature alone changes.  But if the name of the method changes, the compiler is silent as the grave.

In “strict” mode, on the other hand, our hypothetical compiler could issue a warning such as this:

struct BestDerived : public Base
{
	// Warning: member function overrides a base class member, but is not declared with 'override'
	void foo();
};

Perhaps this is best left as a compiler extension, due to the concern over breaking changes.  But it surprises me that no compiler, to my knowledge, currently offers it.  At a minimum, as a user of Microsoft’s compiler, it seems to me that a third-party tool could be created that scans the Intellisense database to determine “strict override” violations.  Any takers?  Visual Assist team?