Enum Filter

I have recently encountered code that is structurally similar to the following:

enum class number {
    zero,
    one,
    two,
    three,
    four,
    five,
    six,
    seven,
    end
};

…

if (value == number::two ||
    value == number::three ||
    value == number::five ||
    value == number::seven) {
    …
}

The manual comparisons do not look good to me: they are repetitive, error-prone, and do not express the intent. So the natural question comes: how can we make the code ‘better’?

While this is a fake example, I hope you can see the point that enumerators have specific properties (which I will call ‘traits’ in this article, as per C++ traditions), and I want the code to show the intent as expressed by traits.

However, let us get rid of ‘value ==’ first. Any repetitions are bad, right?

My first take is something as follows:

template <typename T>
bool is_in(const T& value,
           std::initializer_list<T> value_list)
{
    for (const auto& item : value_list) {
        if (value == item) {
            return true;
        }
    }
    return false;
}
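
With it, the original chain of comparisons can be wrapped in a single call. A purely illustrative sketch (the function name is made up):

bool handles(number value)
{
    return is_in(value, {number::two, number::three,
                         number::five, number::seven});
}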

Very simple and straightforward, but not good enough. How can we generate the list, given some criteria?

If you are familiar with the concept of template metaprogramming, you know that this is a compile-time programming topic: compile-time filtering.

In order to filter on the enumerators, we need to describe them with traits. The following code could be good enough for our current purpose:

template <number n>
struct number_traits;

template <>
struct number_traits<number::zero> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::one> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::two> {
    static constexpr bool is_prime = true;
};

template <>
struct number_traits<number::three> {
    static constexpr bool is_prime = true;
};

template <>
struct number_traits<number::four> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::five> {
    static constexpr bool is_prime = true;
};

template <>
struct number_traits<number::six> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::seven> {
    static constexpr bool is_prime = true;
};

So, let us try figuring out a way to generate such a list.

After some study, you will know that initializer_list is not fit for such manipulations. tuple is a better utility. The main reason is that we had better manipulate types, instead of values, in template metaprogramming. An initializer_list is not capable of doing that, whereas C++ already has a facility to convert compile-time integral constants into types, its name being exactly integral_constant.

Its approximate definition is as follows, in case you are not familiar with it:

template<class T, T v>
struct integral_constant {
    static constexpr T value = v;
    using value_type = T;
    using type = integral_constant;
    constexpr operator value_type() const noexcept
    {
        return value;
    }
    constexpr value_type operator()() const noexcept
    {
        return value;
    }
};

Such a definition is already provided by the standard library. So, instead of having an initializer_list like {number::two, number::three, number::five}, we would have something like the following:

std::make_tuple(
    std::integral_constant<number, number::two>{},
    std::integral_constant<number, number::three>{},
    std::integral_constant<number, number::five>{})

It would be safe to pass such ‘arguments’ for compile-time programming, as only their types matter. We would not need their values, as each type has exactly one unique value.

The next questions are:

  1. How can we generate the constants for all possible enumerators?—I.e. compile-time iteration.
  2. How can we filter to get only the values we want?—I.e. compile-time filtering.
  3. How can we check whether a value is equal to one of the constants we present?—I.e. (compile-time or run-time) checking like the is_in above.

The answer to the first question is that we need to generate a sequence, and we need to know what the last enumerator is. As far as I know, there is currently no way in C++ to enumerate all the enumerators of an enum type. I have to resort to a convention: the enumeration must be continuous, and its end is marked by an enumerator named end, as in the enum class listed at the very beginning of this article. That is, I need to generate the sequence from integral_constant<number, number{0}> up to, but excluding, integral_constant<number, number::end>.

This job can be easily done with the following code, using the standard tuple and index_sequence technique:

template <typename E, size_t... ints>
constexpr auto make_all_enum_consts_impl(
    std::index_sequence<ints...>)
{
    return std::make_tuple(std::integral_constant<
        E, E(ints)>{}...);
}

template <typename E>
constexpr auto make_all_enum_consts()
{
    return make_all_enum_consts_impl<E>(
        std::make_index_sequence<size_t(E::end)>{});
}
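
As a quick sanity check (purely illustrative, not part of the final code), the generated tuple for number should contain one constant per enumerator before end:

constexpr auto all_numbers = make_all_enum_consts<number>();
static_assert(std::tuple_size_v<decltype(all_numbers)> ==
                  size_t(number::end),
              "one constant per enumerator expected");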

Now we have come to the second, and the really difficult, part: how can we filter the values to get only those we need?

The answer is to use apply, tuple_cat, and conditional_t, three important tools in the C++ template metaprogramming world:

  • With apply, we can call a function with all elements of a tuple as arguments. I.e. apply(f, make_tuple(42, "answer")) would be equivalent to f(42, "answer").
  • With tuple_cat, we can concatenate elements of tuples into a new tuple. I.e. tuple_cat(make_tuple(42, "answer"), make_tuple("of", "everything")) would result in the tuple {42, "answer", "of", "everything"}.
  • With conditional_t, we can get one of the given types based on a compile-time Boolean expression. I.e. conditional_t<true, int, string> would result in int, but conditional_t<false, int, string> would result in string.

Each tool may look trivial individually, but they can be combined to work wonders. Specifically, they can do what we now need.

This is the final form I use (mainly inspired by this Stack Overflow answer):

#define ENUM_FILTER_FROM(E, T, tup)                     \
    std::apply(                                         \
        [](auto... ts) {                                \
            return std::tuple_cat(                      \
                std::conditional_t<                     \
                    E##_traits<decltype(ts)::value>::T, \
                    std::tuple<decltype(ts)>,           \
                    std::tuple<>>{}...);                \
        },                                              \
        tup)

Let me explain what it does:

  • The macro takes an enumeration type, a trait name, and a tuple of enumerator constants, such as the one created by make_all_enum_consts above. The reason why a tuple of constants is used is that the result of calling ENUM_FILTER_FROM can be filtered again.
  • std::apply invokes the generic lambda with the elements of the tuple as arguments.
  • The generic lambda performs the compile-time computation of concatenating (tuple_cat) the arguments into a new tuple.
  • Each argument to tuple_cat is either a tuple of one enumerator constant, if the type satisfies the trait, or an empty tuple otherwise.
  • So the end result of executing the code in the macro is a tuple of enumerator constants that satisfy the trait.
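
To illustrate the last point (the check below is only for illustration, not part of the final code), filtering all the number constants on is_prime should leave exactly the constants for two, three, five, and seven:

constexpr auto primes = ENUM_FILTER_FROM(
    number, is_prime, make_all_enum_consts<number>());
static_assert(std::tuple_size_v<decltype(primes)> == 4,
              "two, three, five, and seven are prime");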

The answer to the third question is relatively simple. For maximal flexibility, I am splitting it into two steps:

  • Convert the tuple of types into a tuple of values
  • Check whether a value is in the tuple with a fold expression

Here is the code:

template <typename Tuple, size_t... ints>
constexpr auto make_values_from_consts_impl(
    Tuple tup, std::index_sequence<ints...>)
{
    return std::make_tuple(std::get<ints>(tup)()...);
}

template <typename Tuple>
constexpr auto make_values_from_consts(Tuple tup)
{
    return make_values_from_consts_impl(
        tup, std::make_index_sequence<
                 std::tuple_size_v<Tuple>>{});
}

template <typename T, typename Tuple, size_t... ints>
constexpr bool is_in_impl(const T& value,
                          const Tuple& tup,
                          std::index_sequence<ints...>)
{
    return ((value == std::get<ints>(tup)) || ...);
}

template <typename T, typename Tuple>
constexpr std::enable_if_t<
    std::is_same_v<T, std::decay_t<decltype(std::get<0>(
                          std::declval<Tuple>()))>>,
    bool>
is_in(const T& value, const Tuple& tup)
{
    return is_in_impl(value, tup,
                      std::make_index_sequence<
                          std::tuple_size_v<Tuple>>{});
}

Finally, we can define the function is_prime:

constexpr bool is_prime(number n)
{
    return is_in(
        n, make_values_from_consts(ENUM_FILTER_FROM(
               number, is_prime,
               make_all_enum_consts<number>())));
}
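
With this in place, the chain of comparisons at the beginning of the article collapses to if (is_prime(value)). And since every piece above is constexpr, the result can even be verified at compile time (again, purely illustrative):

static_assert(is_prime(number::seven));
static_assert(!is_prime(number::six));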

More interestingly, the result of invoking ENUM_FILTER_FROM can be passed to ENUM_FILTER_FROM again. If we defined the trait is_even as well as is_prime, we would be able to write:

ENUM_FILTER_FROM(number, is_even,
    ENUM_FILTER_FROM(number, is_prime, …))

Is that nice?

Do note that there is an asymmetry here. It is trivial to implement make_values_from_consts, but it seems impossible to implement its inverse constexpr function make_consts_from_values. This is because there are no constexpr arguments in C++ (check out this discussion if you are interested in the reasons). No function arguments are regarded as constexpr, not even in a constexpr function. You can work around the problem in a cumbersome way, but for this post I am sticking to using types as long as possible.

That’s it, my experience of using compile-time filtering. I have found the techniques presented here useful, and I hope you will find them useful too.

Time Zones in Python

Python datetimes are naïve by default, in that they do not include time zone (or time offset) information. E.g. one might be surprised to find that (datetime.now() - datetime.utcnow()).total_seconds() is basically the local time offset (28800 in my case for UTC+08:00). I personally kind of expected a value near zero. This said, datetime is able to handle time zones, but the definitions of time zones are not included in the Python standard library. A third-party library is necessary for handling time zones. In our project, a developer introduced pytz in the beginning. It all looked well, until I found the following:

>>> from datetime import datetime
>>> from pytz import timezone
>>> timezone('Asia/Shanghai')
<DstTzInfo 'Asia/Shanghai' LMT+8:06:00 STD>
>>> (datetime(2017, 6, 1, tzinfo=timezone('Asia/Shanghai'))
...  - datetime(2017, 6, 1, tzinfo=timezone('UTC'))
... ).total_seconds()
-29160.0

Sh*t! Was pytz a joke? The time zone of Shanghai (or China) should be UTC+08:00, and I did not care a bit about its local mean time (I was, of course, expecting -28800 on the last line). What was the author thinking about? Besides, it did not provide a local time zone function, and we had to hardcode our time zone to 'Asia/Shanghai', which was ugly.—Disappointed, I searched for an alternative, and I found dateutil.tz. From then on, I routinely use code like the following:

from datetime import datetime
from dateutil.tz import tzlocal, tzutc
…
datetime.now(tzlocal())  # for local time
datetime.now(tzutc())    # for UTC time

When answering a StackOverflow question, I realized I misunderstood pytz. I still thought it had some bad design decisions; however, it would have been able to achieve everything I needed, if I had read its manual carefully (I cannot help remembering the famous acronym ‘RTFM’). It was explicitly mentioned in the manual that passing a pytz time zone to the datetime constructor (as I did above) ‘“does not work” with pytz for many timezones’. One has to use the pytz localize method or the standard astimezone method of datetime.
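
For example (just a sketch repeating the zones used above), localize attaches the offset that is actually in effect, whereas the constructor picks the odd LMT entry:

from datetime import datetime
from pytz import timezone

china = timezone('Asia/Shanghai')
# Passing tzinfo to the constructor picks the LMT+8:06 entry
bad = datetime(2017, 6, 1, tzinfo=china)
# localize chooses the offset valid on that date
good = china.localize(datetime(2017, 6, 1))
print(bad.utcoffset())   # 8:06:00
print(good.utcoffset())  # 8:00:00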

As tzlocal and tzutc from dateutil.tz fulfilled all my needs and were easy to use, I continued to use them. The fact that I got a few downvotes on StackOverflow certainly did not make me like pytz better.


When introducing apscheduler to our project, we noticed that it required that the time zone be provided by pytz—it ruled out the use of dateutil.tz. I wondered what was special about it. I also became aware of a Python package called tzlocal, which was able to provide a pytz time zone conforming to the local system settings. More searching and reading revealed facts that I had missed so far:

  • The Python datetime object does not store or handle daylight-saving status. Adding a timedelta to it does not alter its time zone information, and can result in an invalid local time (say, adding one day to the last day of daylight-saving time does not result in a datetime in standard time).
  • The time zone provided by dateutil.tz does not handle all corner cases. E.g. it does not know that Russia observed all-year daylight-saving time from 2012 to 2014, and it does not know that China observed daylight-saving time from 1986 to 1991.
  • The pytz localize and normalize methods can handle all these complexities, and this is partly the reason why pytz requires people to use its localize method instead of passing the time zone to datetime.
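
A small sketch of the last point (the zone and dates are chosen purely for illustration): datetime arithmetic keeps the old offset, and normalize fixes it up afterwards.

from datetime import datetime, timedelta
from pytz import timezone

eastern = timezone('US/Eastern')
start = eastern.localize(datetime(2002, 4, 6, 12, 0))  # EST, UTC-05:00
next_day = start + timedelta(days=1)        # arithmetic keeps the EST offset
print(next_day)                             # 2002-04-07 12:00:00-05:00
print(eastern.normalize(next_day))          # 2002-04-07 13:00:00-04:00 (EDT)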

So pytz can actually do more, and correctly. I can do things like finding out in which years China observed daylight-saving time:

from datetime import datetime, timedelta
from pytz import timezone
china = timezone('Asia/Shanghai')
utc = timezone('UTC')
expect_diff = timedelta(hours=8)
for year in range(1980, 2000):
    dt = datetime(year, 6, 1)
    if utc.localize(dt) - china.localize(dt) != expect_diff:
        print(year)

It is now clear to me that the pytz-style time zone is necessary when apscheduler handles a past or future local time.


A few benchmarks regarding the related functions in ipython (not that they are very important):

from datetime import datetime
import dateutil.tz
import pytz
import tzlocal
dateutil_utc = dateutil.tz.tzutc()
dateutil_local = dateutil.tz.tzlocal()
pytz_utc = pytz.utc
pytz_local = tzlocal.get_localzone()
%timeit datetime.utcnow()
310 ns ± 0.405 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit datetime.now()
745 ns ± 1.65 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit datetime.now(dateutil_utc)
924 ns ± 0.907 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit datetime.now(pytz_utc)
2.28 µs ± 18.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit datetime.now(dateutil_local)
17.4 µs ± 29.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit datetime.now(pytz_local)
5.54 µs ± 11.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

My final recommendations:

  • One should consider using naïve UTC datetimes everywhere, as they are easy and fast to work with.
  • The next best is using offset-aware UTC. Both dateutil.tz and pytz can be used in this case without any problems.
  • In all other cases, pytz (as well as tzlocal) is preferred, but one should beware of the peculiar behaviour of pytz time zones.

My Opinions Regarding the Top Five TIOBE Languages

I have written C++ for nearly 30 years. I had been advocating that it was the best language 🤣, until my love moved to Python a few years ago. I will still say C++ is a very powerful and unique language. It is probably the only language that intersects many different software layers. It lets programmers control the bit-level details, and it has the necessary mechanisms to allow programmers to make appropriate abstractions—arguably one of the best as it provides powerful generics, which are becoming better and better with the upcoming concepts and ranges in C++20. It has very decent optimizing compilers, and suitably written C++ code performs better than nearly all other languages. Therefore, C++ has been widely used in not only low-level stuff like drivers, but also libraries and applications, especially where performance is wanted, like scientific computing and games. It is still widely used in desktop applications, say, Microsoft Office and Adobe Photoshop. The power does come with a price: it is probably the most complicated computer language today. Mastering the language takes a long time (and with 30 years’ experience I dare not say I have mastered the language). Generic code also tends to take a long time to compile. Error messages can be overwhelming, especially to novices. I can go on and on, but it is better to stop here, with a note that the complexity and cost are sometimes worthwhile, in exchange for reduced latency and reduced power usage (from CPU and memory).

Python is, on the other hand, easy to learn. It is not a toy language, though: it is handy not only to novices, but also to software veterans like me. The change-and-run cycle is much shorter than in C++. Code in Python is very readable, partly because lists, sets, and dictionaries are supported literal types (you cannot write in C++ an expression like {"one": 1} and let the compiler deduce it is a dictionary). It has features that C++ has lacked for many years: generator/coroutine, lazy range, and so on. Generics do not need special support, as the language is dynamically typed (but it also does not surprise programmers by allowing error-prone expressions like "1" + 2, as some script languages do). With a good IDE, the argument about its lack of compile-time checks can be crushed—programmers can enjoy edit-time checks. It has a big ecosystem with a huge number of third-party libraries, and they are easier to take and use than in C++ (thanks to pip). The main remaining shortcoming to me is performance, but: 1) one may write C/C++ extensions where necessary; and 2) the lack of performance may not matter at all, if your application is not CPU-bound. See my personal experience of 25x performance boost in two hours.

I used Java a long time ago. I do not like it (mostly for its verbosity), and its desktop/server implementation makes it unsuitable for short-time applications due to its sluggish launch time. However, it has always been a workhorse on the server side, and it has a successful ecosystem, if not much harmed by Oracle’s lawyers. Android also brought life to the old language and the development communities (ignoring for now the bad effects Oracle has brought about).

C# started as Microsoft’s answer to Java, but they have differed more and more since then. I actually like C#, and my experience has shown it is very suitable for Windows application development (I do not have experience with Mono, and I don’t do server development on Windows). Many of its features, like LINQ and on-stack structs, are very likeable.

C is a simple and elegant language, and it can be regarded as the ancestor of three languages above (except Python), at least in syntax. It is the most widely supported. It is the closest to metal, and is still very popular in embedded systems, OS development, and cases where maximum portability is wanted (thus the wide offerings from the open-source communities). It is the most dangerous language, as you can easily have buffer overflows. Incidentally, two of the three current answers to ‘How do you store a list of names input by the user into an array in C (not C++ or C#)?’ can have buffer overflows (and I wrote the other answer). Programmers need to tend to many details themselves.

I myself will code everything in Python where possible, as it usually requires the fewest lines of code and takes the least amount of time. If performance is wanted, I’ll go to C++. For Windows GUI applications, I’ll prefer C#. I will write in C if maximum portability and memory efficiency are wanted. I do not feel I will write in Java, except modifying existing code or when the environment supports Java only.

[I first posted it as a Quora answer, but it is probably worth a page of its own.]

25x Performance Boost in Two Hours

Our system has a find_child_regions API, which, as the name indicates, can find subregions of a region up to a certain level. It needs to look up two MongoDB collections, combine the data in a certain structure, and return the result in JSON.

One day, it was reported that the API was slow for big data sets. Tests showed that it took more than 50 seconds to return close to 6000 records. Er . . . that means the average processing speed is only about 100 records a second—not terribly slow, but definitely not ideal.

When there is a performance problem, a profiler is always your friend.1 Profiling quickly revealed that a database read function was called about twice as many times as the number of returned records, and occupied the biggest chunk of time. The reason was that the function first found out all the IDs of the regions to return, and then it read all the data and generated the result. Since the data were already read once when the IDs were returned, they could be saved and reused. I had to write a new function, which resembled the function that returned region IDs, but returned objects that contained all the data read instead (we had such a class already). I also needed to split the result-generating function into two, so that either the region IDs, or the data objects, could be accepted. (I could not change these functions directly, as they had many other users besides find_child_regions; changing all of them at once would have been both risky and unnecessary.)

In about 30 minutes, this change generated the expected improvement: call time was shortened to about 30 seconds. A good start!

While the improvement percentage looked nice, the absolute time taken was still a bit long. So I continued to look for further optimization chances.

Seeing that database reading was still the bottleneck and the database read function was still called for each record returned, I thought I should try batch reading. Fortunately, I found I only needed to change one function. Basically, I needed to change something like the following

result = []
for x in xs:
    object_id = f(x)
    obj = get_from_db(object_id, …)
    if obj:
        result.append(obj)
return result

to

object_ids = [f(x) for x in xs]
return find_in_db({"_id": {"$in": object_ids}}, …)

I.e. in that specific function, all data of one level of subregions were read in one batch. Getting four levels of subregions took only four database reads, instead of 6000. This reduced the latency significantly.

In 30 minutes, the call time was again reduced, from 30 seconds to 14 seconds. Not bad!

Again, the profiler showed that database reading was still the bottleneck. I made more experiments, and found that the data object could be sizeable, whereas we did not always need all data fields. We might only need, say, 100 bytes from each record, but the average size of each region was more than 50 KB. The functions involved always read the full record, something equivalent to the traditional SQL statement ‘SELECT * FROM ...’. It was convenient, but not efficient. MongoDB APIs provided a projection parameter, which allowed callers to specify which fields to read from the collection, so I tried it. We had the infrastructure in place, and it was not very difficult. It took me about an hour to make it fully work, as many functions needed to be changed to pass the (optional) projection/field names around. When it finally worked, the result was stunning: if one only needed the basic fields about the regions, the call time could be less than 2 seconds. Terrific!
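
With PyMongo, the change was conceptually as simple as adding a projection document to the query (the collection and field names below are made up for illustration):

# Before: every field of each matching document is fetched
regions = collection.find({"_id": {"$in": object_ids}})
# After: only the fields actually needed are fetched
regions = collection.find({"_id": {"$in": object_ids}},
                          {"name": 1, "parent_id": 1})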

While Python is not a performant language, and I still like C++, I am glad that Python was chosen for this project. The performance improvement by the C++ language would have been negligible when the call time was more than 50 seconds, and still a small number when I improved its performance to less than 2 seconds. In the meanwhile, it would have been simply impossible for me to refactor the code and achieve the same performance in two hours if the code had been written in C++. I highly doubt whether I could have finished the job in a full day. I would probably have been fighting with the compiler and type system most of the time, instead of focusing on the logic and testing.

Life is short—choose your language wisely.


  1. Being able to profile Python programs easily was actually the main reason I purchased a professional licence of PyCharm, instead of just using the Community Edition. 

Pipenv and Relocatable Virtual Environments

Pipenv is a very useful tool to create and maintain independent Python working environments. Using it feels like a breeze. There are enough online tutorials about it, and I will only talk about one specific thing in this article: how to move a virtual environment to another machine.

The reason I need to make virtual environments movable is that our clients do not usually allow direct Internet access in production environments, therefore we cannot install packages from online sources on production servers. They also often enforce a certain directory structure. So we need to prepare the environment in our test environment, and it would be better if we did not need to worry about where we put the result on the production server. Virtual environments, especially with the help of Pipenv, seem to provide a nice and painless way of achieving this effect—if we can just make the result of pipenv install movable, or, in the term of virtualenv, relocatable.

virtualenv is already able to make most of the virtual environment relocatable. When working with Pipenv, it can be as simple as

virtualenv --relocatable `pipenv --venv`

There are two problems, though:

  • The activate script still hard-codes the absolute path of the virtual environment in the VIRTUAL_ENV variable.
  • The virtual environment contains a lib64 symbolic link that points to an absolute path.1

They are not difficult to solve, and we can conquer them one by one.

As pointed out in the issue discussion, one only needs to replace one line in activate to make it relocatable. What is originally

VIRTUAL_ENV="/home/yongwei/.local/share/virtualenvs/something--PD5l8nP"

should be changed to

VIRTUAL_ENV=$(cd $(dirname "$BASH_SOURCE"); dirname `pwd`)

To be on the safe side, I would look for exactly the same line and replace it, so some sed tricks are needed. I also need to take care of the differences between BSD sed and GNU sed, but it is a problem already solved before.

The second problem is even easier. Creating a new relative symlink solves the problem.

I’ll share the final result here, a simple script that can make a virtual environment relocatable, as well as creating a tarball from it. The archive has ‘-venv-platform’ as the suffix, but it does not include a root directory. Keep this in mind when you unpack the tarball.

#!/bin/sh

case $(sed --version 2>&1) in
  *GNU*) sed_i () { sed -i "$@"; };;
  *) sed_i () { sed -i '' "$@"; };;
esac

sed_escape() {
  echo $1|sed -e 's/[]\/$*.^[]/\\&/g'
}

VENV_PATH=`pipenv --venv`
if [ $? -ne 0 ]; then
  exit 1
fi
virtualenv --relocatable "$VENV_PATH"

VENV_PATH_ESC=`sed_escape "$VENV_PATH"`
RUN_PATH=`pwd`
BASE_NAME=`basename "$RUN_PATH"`
PLATFORM=`python -c 'import sys; print(sys.platform)'`
cd "$VENV_PATH"
sed_i "s/^VIRTUAL_ENV=\"$VENV_PATH_ESC\"/VIRTUAL_ENV=\$(cd \$(dirname \"\$BASH_SOURCE\"); dirname \`pwd\`)/" bin/activate
[ -h lib64 ] && rm -f lib64 && ln -s lib lib64
tar cvfz $RUN_PATH/$BASE_NAME-venv-$PLATFORM.tar.gz .

After running the script, I can copy the result tarball to another machine of the same OS, unpack it, and then either use the activate script or set the PYTHONPATH environment variable to make my Python program work. Problem solved.

A last note: I have not touched activate.csh and activate.fish, as I do not use them. If you did, you would need to update the script accordingly. That would be your homework as an open-source user. 😼


  1. I tried removing it, and Pipenv was very unhappy. 

Fixing A VS2017 15.6 Installation Problem

After installing the latest Visual Studio 2017 15.6.6 (Community Edition), I found my custom setting of environment variables INCLUDE lost effect in the Developer Command Prompt. Strangely, LIB was still there. Some tracing indicated that it was a bug in the .BAT files Microsoft provided to initialize the environment. The offending lines are the following (in C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\Tools\vsdevcmd\core\winsdk.bat; one of them is very long and requires scrolling for viewing):

@REM the folowing are architecture neutral
set __tmpwinsdk_include=
if "%INCLUDE%" NEQ "" set "__tmp_include=;%INCLUDE%"
set "INCLUDE=%WindowsSdkDir%include\%WindowsSDKVersion%shared;%WindowsSdkDir%include\%WindowsSDKVersion%um;%WindowsSdkDir%include\%WindowsSDKVersion%winrt;%WindowsSdkDir%include\%WindowsSDKVersion%cppwinrt%__tmpwinsdk_include%"
set __tmpwinsdk_include=

Apparently somebody missed renaming __tmp_include to __tmpwinsdk_include. Doing that myself fixed the problem.

I’ve reported the problem to Microsoft. In the meanwhile, you know how to fix it if you encounter the same problem.

A Journey of Purely Static Linking

As I mentioned last time, I found Microsoft has really messed up its console Unicode support when the C runtime DLL (as versus the static runtime library) is used. So I decided to have a try with linking everything statically in my project that uses C++ REST SDK (a.k.a. cpprestsdk). This is not normally recommended, but in my case it has two obvious advantages:

  • It would solve the Unicode I/O problem.
  • It would be possible to ship just the binaries without requiring the target PC to install the Microsoft Visual C++ runtime.

It took me several hours to get it rolling, but I felt it was worthwhile.

Before I start, I need to mention that cpprestsdk has a solution file that supports building a static library. It turned out to be unsatisfactory:

  • It used NuGet packages for Boost and OpenSSL, and both versions were out of date. Worse, my Visual Studio 2017 IDE hung while I tried to update the packages. Really a nuisance.
  • The static library, as well as all its dependencies like Boost and OpenSSL, still uses the C runtime DLL. I figured it might be easier to go completely on my own.

Prerequisites

Boost

This part is straightforward. After going into the Boost directory, I only need to type (I use version 1.65.1):

bootstrap.bat
.\b2.exe toolset=msvc -j 2 --with-chrono --with-date_time --with-regex --with-system --with-thread release link=static runtime-link=static stage
.\b2.exe toolset=msvc -j 2 --with-chrono --with-date_time --with-regex --with-system --with-thread release link=shared stage

(The last line is needed only because ‘cmake ..’ would otherwise fail to detect Boost in the ‘Building C++ REST SDK’ stage.)

OpenSSL

As I already have Perl and NASM installed, installing OpenSSL is trivial too (I use version 1.0.2l):

perl Configure VC-WIN32 --prefix=C:/Libraries/OpenSSL
ms\do_nasm
nmake -f ms\nt.mak
nmake -f ms\nt.mak install

zlib

This part requires a small change to the build script (for version 1.2.11). I need to open win32\Makefile.msc and change all occurrences of ‘-MD’ to ‘-MT’. Then these commands will work:

nmake -f win32\Makefile.msc zlib.lib
mkdir C:\Libraries\zlib
mkdir C:\Libraries\zlib\include
mkdir C:\Libraries\zlib\lib
copy zconf.h C:\Libraries\zlib\include
copy zlib.h C:\Libraries\zlib\include
copy zlib.lib C:\Libraries\zlib\lib

Building C++ REST SDK

We need to set some environment variables to help the CMake build system find where the libraries are. I set them in ‘Control Panel > System > Advanced system settings > Environment variables’:1

BOOST_ROOT=C:\src\boost_1_65_1
OPENSSL_ROOT_DIR=C:\Libraries\OpenSSL
INCLUDE=C:\Libraries\zlib\include
LIB=C:\Libraries\zlib\lib

(The above setting assumes Boost is unpacked under C:\src.)

We would need to create the solution files for the current environment under cpprestsdk:

cd Release
mkdir build
cd build
cmake ..

If the environment is set correctly, the last command should succeed and report no errors. A cpprest.sln should be generated now.

We then open this solution file. As we only need the release files, we should change the ‘Solution Configuration’ from ‘Debug’ to ‘Release’. After that, we need to find the project ‘cpprest’ in ‘Solution Explorer’, go to its ‘Properties’, and make the following changes under ‘General’:

  • Set Target Name to ‘cpprest’.
  • Set Target Extension to ‘.lib’.
  • Set Configuration Type to ‘Static library (.lib)’.

And the most important change we need under ‘C/C++ > Code Generation’:

  • Set Runtime Library to ‘Multi-threaded (/MT)’.

Click on ‘OK’ to accept the changes. Then we can build this project.

Like zlib, we need to copy the header and library files to a new path, and add the include and lib directories to environment variables INCLUDE and LIB, respectively. In my case, I have:

INCLUDE=C:\Libraries\cpprestsdk\include;C:\Libraries\zlib\include
LIB=C:\Libraries\cpprestsdk\lib;C:\Libraries\zlib\lib

Change to my Project

Of course, the cpprestsdk-based project needs to be adjusted too. I will first show the diff, and then give some explanations:

--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -24,14 +24,25 @@ set(CMAKE_CXX_FLAGS "${ELPP_FLAGS}")

 if(WIN32)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}\
- -DELPP_UNICODE -D_WIN32_WINNT=0x0601")
-set(USED_LIBS Boost::dynamic_linking ${Boost_DATE_TIME_LIBRARY} ${Boost_SYSTEM_LIBRARY} ${Boost_THREAD_LIBRARY})
+ -DELPP_UNICODE -D_WIN32_WINNT=0x0601 -D_NO_ASYNCRTIMP")
+set(USED_LIBS Winhttp httpapi bcrypt crypt32 zlib)
 else(WIN32)
 set(USED_LIBS "-lcrypto" OpenSSL::SSL ${Boost_CHRONO_LIBRARY} ${Boost_SYSTEM_LIBRARY} ${Boost_THREAD_LIBRARY})
 endif(WIN32)

 if(MSVC)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc /W3")
+set(CompilerFlags
+        CMAKE_CXX_FLAGS
+        CMAKE_CXX_FLAGS_DEBUG
+        CMAKE_CXX_FLAGS_RELEASE
+        CMAKE_C_FLAGS
+        CMAKE_C_FLAGS_DEBUG
+        CMAKE_C_FLAGS_RELEASE)
+foreach(CompilerFlag ${CompilerFlags})
+  string(REPLACE "/MD" "/MT" ${CompilerFlag}
+         "${${CompilerFlag}}")
+endforeach()
 else(MSVC)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -W -Wall -Wfatal-errors")
 endif(MSVC)

There are two blocks of changes. In the first block, one can see that the Boost libraries are no longer needed, but, instead, one needs to link the Windows dependencies of cpprestsdk (I found the list in Release\build\src\cpprest.vcxproj), as well as zlib. One also needs to explicitly define _NO_ASYNCRTIMP so that the cpprestsdk functions will not be treated as dllimport.

As CMake defaults to using ‘/MD’, the second block of changes replaces all occurrences of ‘/MD’ with ‘/MT’ in the compiler flags.2 With these changes, I am able to generate an executable without any external dependencies.

A Gotcha

I am now used to using cmake without specifying the ‘-G’ option on Windows. By default, CMake generates Visual Studio project files: they have several advantages, including multiple configurations (selectable on the MSBuild command line like ‘/p:Configuration=Release’), and parallel building (say, using ‘/m:2’ to take advantage of two processor cores). Neither is possible with nmake. However, the executables built by this method still behave abnormally when outputting non-ASCII characters. Actually, I believe everything is still OK at the linking stage, but the build process then touches the executables in some mysterious way and the result becomes bad. I am not familiar enough with MSBuild to manipulate the result, so I am going back to using ‘cmake -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release’ followed by ‘nmake’ for now.


  1. CMake can recognize Boost and OpenSSL by some known environment variables. I failed to find one that really worked for zlib, so the INCLUDE and LIB variables need to be explicitly set. 
  2. This technique is shamelessly copied from a StackOverflow answer

Another Microsoft Unicode I/O Problem

I encountered an annoying bug in Visual C++ 2017 recently. It started when I found my favourite logging library, Easylogging++, output Chinese as garbage characters on Windows. Checking the documentation carefully, I noted that I should have used the macro START_EASYLOGGINGPP. It turned out to be worse: all output starting from the Chinese character was gone. Puzzled but busy, I put it down and worked on something else.

I spent another hour of detective work on this issue today. The result was quite surprising.

  • First, it is not an issue with Easylogging++. The problem can occur if I purely use std::wcout.
  • Second, the magical thing about START_EASYLOGGINGPP is that it will invoke std::locale::global(std::locale("")). This is the switch that leads to the different behaviour.
  • Mysteriously, with the correct locale setting, I could get the correct result with both std::wcout and Easylogging++ in a test program, but I was not able to get it working in my real project.
  • Finally, it turns out that the difference above is caused by /MT vs. /MD! The former (default if neither is specified on the command line) tells the Visual C++ compiler to use the static multi-threading library, and the latter (set by default in Visual Studio projects) tells the compiler to use the dynamic multi-threading library.

People may remember that I wrote about MSVCRT.DLL Console I/O Bug. While Visual C++ 2013 shows consistent behaviour between /MT and /MD, Visual C++ 2015 and 2017 exhibit the same woeful bug when /MD is specified on the command line. This is something perplexingly weird: it seems someone at Microsoft messed up with the MSVCRT.DLL shipped with Windows first (around 2006), and then the problem spread to the Visual C++ runtime DLL after nearly a decade!

I am using many modern C++ features, so I definitely do not want to go back to Visual C++ 2013 for new projects. It seems I have to tolerate garbage characters in the log for now. Meanwhile, I submitted a bug to Microsoft. Given that I have a bug report that is deferred for four years, I am not very hopeful. But let us wait and see.

Update (20 December 2017)

A few more tests show that the debug versions (/MTd and /MDd) both work well. So only the default release build (using the dynamic C runtime) exhibits this problem, where the executable depends on DLLs like api-ms-win-crt-stdio-l1-1-0.dll. It seems this issue is related to the Universal C Runtime introduced in Visual Studio 2015 and Windows 10. . . .

Update (25 March 2018)

The bug was closed, and a Microsoft developer indicated that the issue had already been fixed since the Windows 10 Anniversary Update SDK (build 10.0.14393). Actually I had had build 10.0.15063 installed. The reason why I still saw the problem was that the Universal C Runtime on Windows 7 had not been updated (‘the issue will be fixed in a future update to the Universal C Runtime on Windows 7’), and I should not have seen the problem on a Windows 10 box. The current workaround is either to use static linking (as I did), or to copy the redistributable DLLs under C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs\x86 (or x64 etc.) to the app directory (so-called ‘app-local deployment’, which should not be used on Windows 10, as the system version is always preferred). My test showed that copying ucrtbase.dll was enough to fix my test case.

C/C++ Performance, mmap, and string_view

A C++ discussion group I participated in got a challenge last week. Someone posted a short Perl program, and claimed that it was faster than the corresponding C version. It was to this effect (with slight modifications):

open IN, "$ARGV[0]";
my $gc = 0;
while (my $line = <IN>) {
  $gc += ($line =~ tr/cCgG//);
}
print "$gc\n";
close IN

The program simply searched and counted all occurrences of the letters ‘C’ and ‘G’ from the input file in a case-insensitive way. Since the posted C code was incomplete, I wrote a naïve implementation to test, which did turn out to be slower than the Perl code. It was about two times as slow on Linux, and about 10 times as slow on macOS.1

FILE* fp = fopen(argv[1], "rb");
int count = 0;
int ch;
while ((ch = getc(fp)) != EOF) {
    if (ch == 'c' || ch == 'C' || ch == 'g' || ch == 'G') {
        ++count;
    }
}
fclose(fp);

Of course, it actually shows how optimized the Perl implementation is, instead of how inferior the C language is. Before I had time to test my mmap_line_reader, another person posted an mmap-based solution, to the following effect (with modifications):

int fd = open(argv[1], O_RDONLY);
struct stat st;
fstat(fd, &st);
int len = st.st_size;
char ch;
int count = 0;
char* ptr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
close(fd);
char* begin = ptr;
char* end = ptr + len;
while (ptr < end) {
    ch = *ptr++;
    if (ch == 'c' || ch == 'C' || ch == 'g' || ch == 'G')
        ++count;
}
munmap(begin, len);

When I tested my mmap_line_reader, I found that its performance was only on par with the Perl code, but slower than the handwritten mmap-based code. It is not surprising, considering that mmap_line_reader copies the line content, while the C code above searches directly in the mmap’d buffer.

I then thought of the C++17 string_view.2 It seemed a good chance to use it to return a line without copying its content. It was actually easy refactoring (on code duplicated from mmap_line_reader), and most of the original code did not require any changes. I got faster code for this test, but the implementations of mmap_line_reader and the new mmap_line_reader_sv were nearly identical, except for a few small differences.

Naturally, the next step was to refactor again to unify the implementations. I made a common base to store the bottommost mmap-related logic, and made the difference between string and string_view disappear with a class template. Now mmap_line_reader and mmap_line_reader_sv were just two aliases of instantiations of basic_mmap_line_reader!

While mmap_line_reader_sv was faster than mmap_line_reader, it was still slower than the mmap-based C code. So I made another abstraction, a ‘container’ that allowed iteration over all of the file content. Since the mmap-related logic was already mostly separated, only some minor modifications were needed to make that base class fully independent of the line reading logic. After that, adding mmap_char_reader was easy, which was simply a normal container that did not need to mess with platform-specific logic.

At this point, all seemed well—except one thing: it only worked on Unix. I did not have an immediate need to make it work on Windows, but I really wanted to show that the abstraction provided could work seamlessly regardless of the platform underneath. After several revisions, in which I dealt with proper Windows support,3 proper 32- and 64-bit support, and my own mistakes, I finally made it work. You can check out the current code in the nvwa repository. With it, I can finally make the following simple code work on Linux, macOS, and Windows, with the same efficiency as raw C code when fully optimized:4

#include <iostream>
#include <stdlib.h>
#include "nvwa/mmap_byte_reader.h"

int main(int argc, char* argv[])
{
    if (argc != 2) {
        std::cerr << "A file name is needed" << std::endl;
        exit(1);
    }

    try {
        int count = 0;
        for (char ch : nvwa::mmap_char_reader(argv[1]))
            if (ch == 'c' || ch == 'C' ||
                ch == 'g' || ch == 'G')
                ++count;
        std::cout << count << std::endl;
    }
    catch (std::exception& e) {
        std::cerr << e.what() << std::endl;
    }
}

Even though it is not used in the final code, I love the C++17 string_view. And I like the simplicity I finally achieved. Do you?

P.S. The string_view-based test code is posted here too as a reference. Line-based processing is common enough!5

#include <iostream>
#include <stdlib.h>
#include "nvwa/mmap_line_reader.h"

int main(int argc, char* argv[])
{
    if (argc != 2) {
        std::cerr << "A file name is needed" << std::endl;
        exit(1);
    }

    try {
        int count = 0;
        for (const auto& line :
                nvwa::mmap_line_reader_sv(argv[1]))
            for (char ch : line)
                if (ch == 'c' || ch == 'C' ||
                    ch == 'g' || ch == 'G')
                    ++count;
        std::cout << count << std::endl;
    }
    catch (std::exception& e) {
        std::cerr << e.what() << std::endl;
    }
}

Update (18 September 2017)

Thanks to Alex Maystrenko (see the comments below), it is now understood that the reason why getc was slow is that there is an implicit lock around file operations. I did not expect it, as I grew up in an age when multi-threading was the exception, and I had not learnt about the existence of getc_unlocked until he mentioned it! According to the getc_unlocked page in the POSIX specification:

Some I/O functions are typically implemented as macros for performance reasons (for example, putc() and getc()). For safety, they need to be synchronized, but it is often too expensive to synchronize on every character. Nevertheless, it was felt that the safety concerns were more important; consequently, the getc(), getchar(), putc(), and putchar() functions are required to be thread-safe. However, unlocked versions are also provided with names that clearly indicate the unsafe nature of their operation but can be used to exploit their higher performance.

After replacing getc with getc_unlocked, the naïve implementation immediately outperforms the Perl code on Linux, though not on macOS.6

Another interesting thing to notice is that GCC provides vastly optimized code for the comparison with ‘c’, ‘C’, ‘g’, and ‘G’,7 which is extremely unlikely for interpreted languages like Perl. Observe that the codes for the characters are:

  • 01000011₂ or 67₁₀ (‘C’)
  • 01000111₂ or 71₁₀ (‘G’)
  • 01100011₂ or 99₁₀ (‘c’)
  • 01100111₂ or 103₁₀ (‘g’)

GCC basically ANDs the input character with 11011011₂, and compares the result with 01000011₂. In Intel-style assembly:

        movzx   ecx, BYTE PTR [r12]
        and     ecx, -37
        cmp     cl, 67

It is astoundingly short and efficient!
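
Written out by hand, the trick is a single masked comparison (shown only for illustration; the compiler derives it by itself from the plain comparisons above):

// 0xDB is 11011011 in binary: it clears exactly the two bits in which
// 'C', 'G', 'c', and 'g' differ, mapping all four of them to 'C' (67).
if ((ch & 0xDB) == 'C') {
    ++count;
}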


  1. I love my Mac, but I do feel Linux has great optimizations. 
  2. If you are not familiar with string_view, check out its reference
  3. Did I mention how unorthogonal and uncomfortable the Win32 API is, when compared with POSIX? I am very happy to hide all the ugliness from the application code. 
  4. Comparison was only made on Unix (using the POSIX mmap API), under GCC with ‘-O3’ optimization (‘-O2’ was not enough). 
  5. ‘-std=c++17’ must be specified for GCC, and ‘/std:c++latest’ must be specified for Visual C++ 2017. 
  6. However, the performance improvement is more dramatic on macOS, from about 10 times slower to 5% slower. 
  7. Of course, this is only possible when the characters are given as compile-time constants. 

Annoying Vim Behaviour on Ubuntu 16.04

When I started using Ubuntu 16.04 LTS, I was overall happy with it, except for one thing. Its Vim installation behaved strangely, to say the least. At first glance, it tried to be user-friendly: it automatically provided pop-up prompts when I typed something. It also tried to be smart, e.g. path candidates were provided when I was typing a path. Sounds great? Er . . . actually no, at least not to experienced Vim users.

The problem with it was that it was far too intrusive. If what I was typing had at least one autocompletion candidate when I typed Enter, the first candidate was chosen, instead of inserting a newline. What if I really meant to insert a newline? Oh, I had to press Ctrl-E to cancel the autocompletion first (by the way, this is a Vim command I had to do a Google search to find; before that I used Esc followed by o).

Now think about it: Which thing does an advanced user do more often, consulting the screen and choosing an autocompletion candidate, or continuously typing onto newlines, probably without even looking at the screen? When we do need to autocomplete, we all know well how to invoke autocompletion by Ctrl-P and Ctrl-N.

I tolerated this behaviour for a few months. Today I decided I really needed to get rid of it. I first searched the web, but did not see any obvious results. After a few experiments, I checked the /usr/share/vim/vim74/plugin directory, and immediately found a likely suspect—acp.vim! It turned out to be the AutoComplPop plugin. After checking its code, I simply inserted the following line into my .vimrc file:

let acp_enableAtStartup=0

Problem solved, and I am feeling happy again!

I do not know whether Canonical thought carefully about enabling this plugin by default, which might help new users but hurt experienced users. Maybe they thought experienced users could find a way out. I guess it might be true, but I wasted 20 minutes finding the solution, and then spent another hour writing up this article. . . . I do hope this article can help another disgruntled Vim user on Ubuntu. 🙂