GameArchitect.net: Musings On Game Engine Structure and Design

 

Nothing's Ever Easy: Evolution Of A Flag Set Class

By Kyle Wilson
Tuesday, June 11, 2002

Plasma, the engine I worked on at Cyan, was the most flexible code I've ever worked with.  That meant that every class had a wide range of options.  When I got there, UInt32s were in common use as flag vectors, and there were lots of flags in lots of different classes.  The coding style was that flags would be specified as enums, looking something like

enum
{
     kPropAnimated = 0x1, 
     kPropPhysical = 0x2,
     kPropAutoStart  = 0x8,
     kPropGrabable  = 0x10,
     /* etc... */ 
};

and they got set using standard bitwise Boolean operations.  So something might be initialized like

m_movementFlags = kPropAnimated | kPropAutoStart;

It drove me crazy.  Some classes had four or five different flag fields, and there was no type-checking at all to keep you from setting, for example, kPropAutoStart in m_shaderFlags instead of m_movementFlags, where it would be treated as a different value entirely and create unforeseen side-effects.  This didn't happen often, but I caught myself setting the wrong flags this way more than once.  I never got around to changing the scheme, though, because it was woven so completely throughout the Plasma engine.

So now I'm at iROCK.  When I started, I don't think any class in RFEngine, the code base we're working on here, had more than a couple of bools in it to configure options.  Some of the work that I've done since arrival has called for a little more flexibility, though, so I thought I'd see if I couldn't do flags any better.

First Implementation

My first attempt looked something like this, though this is my version, not iROCK's, and conforms to my coding and naming conventions rather than my current employer's:

template <class EnumT>
class CoreFlagSet
{
     public:
          CoreFlagSet() : m_flagSet(0)       { }
          CoreFlagSet(EnumT val) : m_flagSet(0x1 << val)  { }
  
          CoreFlagSet& Raise(EnumT val)
                    { assert(val < 32); m_flagSet |= (0x1 << val); return *this; }
          CoreFlagSet& Lower(EnumT val)
                    { assert(val < 32); m_flagSet &= ~(0x1 << val); return *this; }
          bool Test(EnumT val) const
                    { assert(val < 32); return (m_flagSet & (0x1 << val)) != 0; }
  
          CoreFlagSet& Merge(const CoreFlagSet& rhs)
                    { m_flagSet |= rhs.m_flagSet; return *this; }
          bool Any() const         { return m_flagSet != 0; }
          bool None() const         { return m_flagSet == 0; }
   
     private:
          UInt32 m_flagSet;
};

To use it, I'd define

enum MoveProps
{
     kPropAnimated = 1, 
     kPropPhysical,
     kPropAutoStart,
     kPropGrabable,
     /* etc... */ 
};

And declare my flag variable to be of type CoreFlagSet<MoveProps>.

There are a couple of improvements I'd like to make to this, and some other additions about which I'm still undecided.

Second Implementation

My first improvement, alas, proves unfeasible.  I'd like to better encapsulate the flags and the flag set by using Jim Coplien's "Curiously Recurring Template Pattern".  (Not available on the web, but you can read his paper in C++ Gems.)  That is, I'd like to make CoreFlagSet parameterized on some derived type, like this

template <class DerivT>
class CoreFlagSet
{
     typedef typename DerivT::FlagEnum EnumT;

     public:
          CoreFlagSet() : m_flagSet(0)       { }
          /* etc... the rest of CoreFlagSet is unchanged */ 
};

so that you'd declare a new flag set with the syntax

class MovePropFlagSet : public CoreFlagSet<MovePropFlagSet>
{
     enum MoveProps
     {
          kPropAnimated = 1, 
          /* etc... */ 
     };
};

Unfortunately, I achieve type-safety by making Raise and Lower take enums instead of integers as parameters.  That means that the template class needs to know about the derived class at class instantiation time, not function instantiation time.  Were I just using the enum inside the bodies of the Raise and Lower functions, this pattern would work.  That's not an option, though, so it looks like I'll have to stick with my current slightly unwieldy usage.

Third Implementation

Although I'm checking it with asserts, the limitation to 32 bits of flags is a little confining.  I can escape this by giving up UInt32s and making the flag set template a wrapper around an STL bitset of the appropriate size.  I simply #include <bitset> and redefine my class as

template <class EnumT, UInt32 MaxFlag>
class CoreFlagSet
{
     public:
          CoreFlagSet()           { }
          CoreFlagSet(EnumT val)        { m_flagSet.set(val, true); }
               
          CoreFlagSet& Raise(EnumT val)  
                    { m_flagSet.set(val, true); return *this; }
          CoreFlagSet& Lower(EnumT val)  
                    { m_flagSet.set(val, false); return *this; }
          bool Test(EnumT val) const  
                    { return m_flagSet.test(val); }
                
          CoreFlagSet& Merge(const CoreFlagSet& rhs)  
                    { m_flagSet |= rhs.m_flagSet; return *this; }
          bool Any() const      { return m_flagSet.any(); }
          bool None() const     { return m_flagSet.none(); }
  
      private:
          std::bitset<MaxFlag> m_flagSet;
};

Voila!

Fourth Implementation

I wonder, though: are bitsets really efficient enough for what we want to do?  Even if a flag set uses fewer than 32 bits, alignment requirements will almost always pad it out to 32 bits, so for a small number of flags a bitset probably won't save us space.  And unlike a UInt32, checking a flag now requires an addition, a division, and a mod, as well as the shift and bitwise AND.  If flags are checked frequently in tight loops, this might become a significant slowdown.

For my fourth implementation, then, I'd like to use my first implementation for flag sets with 32 flags or fewer and bitsets for everything else.  I do this with partial template specialization.  It should look something like

template <class EnumT, UInt32 MaxFlag, bool UseBitset>
class CoreFlagSetImpl
{
     /* Body of first implementation, substituting CoreFlagSetImpl for CoreFlagSet */
};

template <class EnumT, UInt32 MaxFlag>
class CoreFlagSetImpl<EnumT, MaxFlag, true>
{
     /* Body of third implementation, substituting CoreFlagSetImpl for CoreFlagSet  */
};

template <class EnumT, UInt32 MaxFlag>
class CoreFlagSet : public CoreFlagSetImpl<EnumT, MaxFlag, (MaxFlag > 32)>
{
     public:
          CoreFlagSet()           { }
          CoreFlagSet(EnumT val) : CoreFlagSetImpl<EnumT, MaxFlag, (MaxFlag > 32)>(val)  { }
};

The public inheritance of CoreFlagSetImpl leaves open the possibility that someone could delete a CoreFlagSet through a pointer to a CoreFlagSetImpl<EnumT, MaxFlag, UseBitset>.  To protect against that, I could give the CoreFlagSetImpl specializations virtual destructors.  That would bloat the size of CoreFlagSet, though, to protect against an operation which, while possible, would be completely insane.  I'll go without a virtual destructor.  You can't protect against everything.

Unfortunately, I'm using MSVC 6.0, which doesn't support partial template specialization.

Fifth Implementation

Fortunately, there's a common workaround for compilers that don't support partial template specialization which uses nested classes instead.  With a quick shuffling and nesting, the class becomes

template <class EnumT, UInt32 MaxFlag>
struct CoreFlagSetWrapper
{
     template <bool UseBitset>
     class Impl
     {
          /* Body of first implementation, substituting Impl for CoreFlagSet */
      };

     template <>
     class Impl<true>
     {  
          /* Body of third implementation, substituting Impl<true> for CoreFlagSet */
     };
};

template <class EnumT, UInt32 MaxFlag>
class CoreFlagSet : public CoreFlagSetWrapper<EnumT, MaxFlag>::Impl<(MaxFlag > 32)>
{
     public:
          CoreFlagSet()           { }
          CoreFlagSet(EnumT val) : CoreFlagSetWrapper<EnumT, MaxFlag>::Impl<(MaxFlag > 32)>(val)  { }
};

This works wonderfully until I run it through the online Comeau C++ compiler to test its standard compliance, and get an error because I'm specializing a nested class in nested scope.  A little quick research uncovers the reason why, which is a vaguely worded line in the C++ standard.

Sixth Implementation

The logical step for the sixth implementation would be to move the specialization of Impl out of CoreFlagSetWrapper and declare it in global scope with something like

template<class EnumT, UInt32 MaxFlag> template<>
class CoreFlagSetWrapper<EnumT, MaxFlag>::Impl<true>
{
     /* Body of third implementation, substituting Impl<true> for CoreFlagSet */
};

Unfortunately, this isn't legal C++ either.  Just in case there was any doubt, MSVC will tell you, "template definitions cannot nest."  Comeau has the slightly more helpful error message, "a template declaration containing a template parameter list may not be followed by an explicit specialization declaration."  It doesn't get much more definite than that.

Conclusion

So, for my little type-safe flags class, I'm left with one version (number four) that does what I want and is standard compliant, and one version (number five) that actually works on the platform I'm using.  Implementation number five is also supported by Borland and Metrowerks compilers, and so seems to be more widely accepted than the standard-compliant version!

Future enhancements are left as an exercise for the reader.  I'm ambivalent about adding bitwise operators to the flag set class.  Doing so would have made changing implementations easier at Cyan, and it exposes a commonly understood shorthand that saves a lot of typing.  But I think it encourages the wrong kind of thinking.  It gets programmers thinking they're dealing with bits instead of a flag abstraction, and before you know it they're bitwise OR-ing flag enums together with disastrous consequences.

The source for all the different flag set implementations is available here.

I'm Kyle Wilson.  I've worked in the game industry since I got out of grad school in 1997.  Any opinions expressed herein are in no way representative of those of my employers.
