Compile-Time Strings

I have encountered many compile-time uses of strings in my projects in the past few years. I would like to summarize my experience today.

Choice of Types

std::string is mostly unsuitable for compile-time string manipulations. There are several reasons:

  • Before C++20 one cannot use strings at all at compile time. In addition, the support for compile-time strings came quite late among the major compilers. MSVC was the front runner in this regard, GCC came second with GCC 12 (released a short while ago), and Clang has not yet had a formal release with compile-time string support.
  • With C++20 one can use strings at compile time, but there are still a lot of inconveniences, the most obvious being that strings generated at compile time cannot be used at run time. Besides, a string cannot be declared constexpr.
  • A string cannot be used as a template argument.

So we have to give up this apparent choice and explore other possibilities. The candidates are:

  • const char pointer, which is what a string literal can naturally decay to
  • string_view, a powerful tool added by C++17: it has similar member functions to those of string, but they are mostly marked as constexpr!
  • array, with which we can generate brand-new strings

We will try these types in the following discussion.

Functions Commonly Needed

Getting the String Length

One of the most basic functions on a string is getting its length. Here we cannot use the C function strlen, as it is not constexpr.

We will try several different ways to implement it.

First, we can implement strlen manually, and mark the function constexpr:

namespace strtools {

constexpr size_t length(const char* str)
{
    size_t count = 0;
    while (*str != '\0') {
        ++str;
        ++count;
    }
    return count;
}

} // namespace strtools

However, is there an existing mechanism to retrieve the length of a string in the standard library? The answer is a definite Yes. The standard library does support getting the length of a string of any of the standard character types, like char, wchar_t, etc. With the most common character type char, we can write:

constexpr size_t length(const char* str)
{
    return char_traits<char>::length(str);
}

Starting with C++17, the methods of char_traits can be used at compile time. (However, you may encounter problems with older compiler versions, like GCC 8.)

Assuming you can use C++17, string_view is definitely worth a try:

constexpr size_t length(string_view sv)
{
    return sv.size();
}

Regardless of the approach used, now we can use the following code to verify that we can indeed check the length of a string at compile time:

static_assert(strtools::length("Hi") == 2);

At present, the string_view implementation seems the most convenient.

Finding a Character

Finding a specific character is also quite often needed. We can’t use strchr, but again, we can choose from a few different implementations. The code is pretty simple, whether implemented with char_traits or with string_view.

Here is the version with char_traits:

constexpr const char* find(const char* str, char ch)
{
    return char_traits<char>::find(str, length(str),
                                   ch);
}

Here is the version with string_view:

constexpr string_view::size_type find(string_view sv,
                                      char ch)
{
    return sv.find(ch);
}

I am not going to show the manual lookup code this time. (Unless you have to use an old compiler, simpler is better.)

Comparing Strings

The next functions are string comparisons. Here string_view wins hands down: string_view supports the standard comparisons directly, and you do not need to write any code.
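
For a quick compile-time check (my own example; the sv suffix comes from std::literals, which is also used later in this post):

using namespace std::literals;

static_assert("apple"sv < "banana"sv);
static_assert("Hi"sv == "Hi"sv);
static_assert("Hi"sv != "Ha"sv);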

Getting Substrings

It seems that string_views are very convenient, and we should use string_views wherever possible. However, is string_view::substr enough for getting substrings? This is difficult to answer without an actual usage scenario. One real scenario I encountered in projects was that the __FILE__ macro may contain the full path at compile time, resulting in different binaries when compiling under different paths. We wanted to truncate the path completely so that the absolute paths would not show up in binaries.

My tests showed that string_view::substr could not handle this job. With the following code:

puts("/usr/local"sv.substr(5).data());

We will see assembly output like the following from the compiler (see https://godbolt.org/z/1dssd96vz):

.LC0:
        .string "/usr/local"
        …
        mov     edi, OFFSET FLAT:.LC0+5
        call    puts

We have to find another way…

Let’s try array. It’s easy to think of code like the following:

constexpr auto substr(string_view sv, size_t offset,
                      size_t count)
{
    array<char, count + 1> result{};
    copy_n(&sv[offset], count, result.data());
    return result;
}

The intention of the code should be very clear: generate a brand-new character array of the requested size and zero it out (constexpr variables must be initialized on declaration before C++20); copy what we need; and then return the result. Unfortunately, the code won’t compile…

There are two problems in the code:

  • Function parameters are not constexpr, and cannot be used as template arguments.
  • copy_n is not constexpr before C++20, and cannot be used in compile-time programming.

The second problem is easy to fix: a manual loop will do. We shall focus on the first problem.

A constexpr function can be evaluated at compile time or at run time, so its function arguments are not treated as compile-time constants, and cannot be used in places where compile-time constants are required, such as template arguments.

Furthermore, this problem still exists with the C++20 consteval function, which is only invoked at compile time. The main issue is that if we allowed function parameters to be used as compile-time constants, we could write a function where arguments of different values (of the same type) produced return values of different types. For example (currently illegal):

consteval auto make_constant(int n)
{
    return integral_constant<int, n>{};
}

This is unacceptable in the current type system: we still require that the return values of a function have a unique type. If we want a value to be used as a template argument inside a function, it must be passed to the function template as a template argument (rather than as a function argument to a non-template function). In this case, each distinct template argument implies a different template specialization, so the issue of a multiple-return-type function does not occur.
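
By contrast, here is a minimal sketch of mine that does compile, with the value passed as a template argument instead of a function argument:

template <int N>
constexpr auto make_constant()
{
    // Each N implies a distinct specialization, so each
    // specialization has its own unique return type
    return integral_constant<int, N>{};
}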

By the way, a standard proposal P1045 tried to solve this problem, but its progress seems stalled. As there are workarounds (to be discussed below), we are still able to achieve the desired effect.

Let’s now return to the substr function and convert the count parameter into a template parameter. Here is the result:

template <size_t Count>
constexpr auto substr(string_view sv, size_t offset = 0)
{
    array<char, Count + 1> result{};
    for (size_t i = 0; i < Count; ++i) {
        result[i] = sv[offset + i];
    }
    return result;
}

The code can really work this time. With ‘puts(substr("/usr/local", 5).data())’, we no longer see "/usr/" in the compiler output.


Regretfully, we now see how compilers are challenged with abstractions: With the latest versions of GCC (12.1) and MSVC (19.32) on Godbolt, this version of substr does not generate the optimal output. There are also some compatibility issues with older compiler versions. So, purely from a practical point of view, I recommend the following implementation that does not use string_view:

template <size_t Count>
constexpr auto substr(const char* str,
                      size_t offset = 0)
{
    array<char, Count + 1> result{};
    for (size_t i = 0; i < Count; ++i) {
        result[i] = str[offset + i];
    }
    return result;
}

If you are interested, you can compare the assembly outputs of these two versions of the code on Godbolt. Only Clang is able to generate the same efficient assembly code with both versions:

        mov     word ptr [rsp + 4], 108
        mov     dword ptr [rsp], 1633906540
        mov     rdi, rsp
        call    puts

If you wonder where the numbers 108 and 1633906540 come from, let me remind you that their hexadecimal representations are 0x6C and 0x61636F6C, respectively. Check the ASCII table and you should be able to figure it out.


Since we stopped using string_view in the function parameters, the parameter offset becomes much less useful. Hence, I will get rid of this parameter, and rename the function to copy_str:

template <size_t Count>
constexpr auto copy_str(const char* str)
{
    array<char, Count + 1> result{};
    for (size_t i = 0; i < Count; ++i) {
        result[i] = str[i];
    }
    return result;
}
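
A quick sanity check (my own example) that copy_str produces a null-terminated array of the requested size:

constexpr auto local = copy_str<5>("local");
static_assert(length(local.data()) == 5);
static_assert(local[5] == '\0');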

Passing Arguments at Compile Time

When you try composing the compile-time functions together, you will find something lacking. For example, if you wanted to remove the first segment of a path automatically (like from "/usr/local" to "local"), you might try some code like the following:

constexpr auto remove_head(const char* path)
{
    if (*path == '/') {
        ++path;
    }
    auto start = find(path, '/');
    if (start == nullptr) {
        return copy_str<length(path)>(path);
    } else {
        return copy_str<length(start + 1)>(start + 1);
    }
}

The problem is still that it won’t compile. And did you notice that this code violates exactly the constraint I mentioned above that the return type of a function must be consistent and unique?

I have adopted a solution described by Michael Park: using lambda expressions to encapsulate ‘compile-time arguments’. I have defined three macros for convenience and readability:

#define CARG typename
#define CARG_WRAP(x) [] { return (x); }
#define CARG_UNWRAP(x) (x)()

‘CARG’ means ‘constexpr argument’, a compile-time constant argument. We can now make make_constant really work:

template <CARG Int>
constexpr auto make_constant(Int cn)
{
    constexpr int n = CARG_UNWRAP(cn);
    return integral_constant<int, n>{};
}

And it is easy to verify that it works:

auto result = make_constant(CARG_WRAP(2));
static_assert(std::is_same_v<integral_constant<int, 2>,
                             decltype(result)>);

A few explanations follow. In the template parameter, I use CARG (instead of typename) for code readability: it indicates the intention that the template parameter is essentially a type wrapper for compile-time constants. Int is the name of this special type. We will not provide this type when instantiating the function template, but instead let the compiler deduce it. When calling the ‘function’ (make_constant(CARG_WRAP(2))), we provide a lambda expression ([] { return (2); }), which encapsulates the constant we need. When we need to use this parameter, we use CARG_UNWRAP (evaluate: [] { return (2); }()) to get the constant back.

Now we can rewrite the remove_head function:

template <CARG Str>
constexpr auto remove_head(Str cpath)
{
    constexpr auto path = CARG_UNWRAP(cpath);
    constexpr int skip = (*path == '/') ? 1 : 0;
    constexpr auto pos = path + skip;
    constexpr auto start = find(pos, '/');
    if constexpr (start == nullptr) {
        return copy_str<length(pos)>(pos);
    } else {
        return copy_str<length(start + 1)>(start + 1);
    }
}

This function is similar in structure to the previous version, but there are many detail changes. In order to pass the result to copy_str as a template argument, we have to use constexpr all the way along. So we have to give up mutability, and write code in a quite functional style.

Does it really work? Let’s put the following statement into the main function:

puts(strtools::remove_head(CARG_WRAP("/usr/local"))
         .data());

And here is the optimized assembly output from GCC on x86-64 (see https://godbolt.org/z/Mv5YanPvq):

main:
        sub     rsp, 24
        mov     eax, DWORD PTR .LC0[rip]
        lea     rdi, [rsp+8]
        mov     DWORD PTR [rsp+8], eax
        mov     eax, 108
        mov     WORD PTR [rsp+12], ax
        call    puts
        xor     eax, eax
        add     rsp, 24
        ret
.LC0:
        .byte   108
        .byte   111
        .byte   99
        .byte   97

As you can see clearly, the compiler will put the ASCII codes for "local" on the stack, assign its starting address to the rdi register, and then call the puts function. There is absolutely no trace of "/usr/" in the output. In fact, there is no difference between the output of the puts statement above and that of ‘puts(substr("/usr/local", 5).data())’.

I would like to remind you that it is safe to pass and store the character array, but it is not safe to store the pointer obtained from its data() method. It is possible to use such a pointer immediately in calling other functions (like puts above), as the lifetime of array will extend till the current statement finishes execution. However, if you saved this pointer, it would become dangling after the current statement, and dereferencing it would then be undefined behaviour.
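
In code, the difference is as follows (my own illustration):

// OK: the temporary array lives until the end of the
// full expression
puts(copy_str<5>("local").data());

// Bad: the array is gone when this statement ends, so
// ptr dangles
const char* ptr = copy_str<5>("local").data();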

String Template Parameters

We have tried turning strings into types (via lambda expressions) for compile-time argument passing, but unlike integers and integral_constants, there is no one-to-one correspondence between the two. This is often inconvenient: for two integral_constants, we can directly use is_same to determine whether they are the same; for strings represented as lambda expressions, we cannot do the same—two lambda expressions always have different types.

Direct use of string literals as non-type template arguments is not allowed in C++, because strings may appear repeatedly in different translation units, and they do not have proper comparison semantics—comparing two strings is just a comparison of two pointers, which cannot achieve what users generally expect. To use string literals as template arguments, we need to find a way to pass the string as a sequence of characters to the template. We have two methods available:

  • The non-standard GNU extension used by GCC and Clang (which can be used prior to C++20)
  • The C++20 approach suitable for any conformant compilers (including GCC and Clang)

Let’s have a look at them one by one.

The GNU Extension

GCC and Clang have implemented the standard proposal N3599, which allows us to use strings as template arguments. The compiler will expand the string into characters, and the rest is standard C++.

Here is an example:

template <char... Cs>
struct compile_time_string {
    static constexpr char value[]{Cs..., '\0'};
};

template <typename T, T... Cs>
constexpr compile_time_string<Cs...> operator""_cts()
{
    return {};
}

The definition of the class template is standard C++, so that compile_time_string<'H', 'i'> is a valid type and, at the same time, by taking the value member of this type, we can get back "Hi". The GNU extension is the string literal operator template—we can now write ‘"Hi"_cts’ to get an object of type compile_time_string. The following code will compile with the above definitions:

constexpr auto a = "Hi"_cts;
constexpr auto b = "Hi"_cts;
static_assert(is_same_v<decltype(a), decltype(b)>);

The C++20 Approach

Though the above method is simple and effective, it failed to reach consensus in the C++ standards committee and did not become part of the standard. However, with C++20, we can use more types in non-type template parameters. In particular, user-defined literal types are amongst them. Here is an example:

template <size_t N>
struct compile_time_string {
    constexpr compile_time_string(const char (&str)[N])
    {
        copy_n(str, N, value);
    }
    char value[N]{};
};

template <compile_time_string cts>
constexpr auto operator""_cts()
{
    return cts;
}

Again, the first class template is not special, but allowing this compile_time_string to be used as the type of a non-type template parameter (quite a mouthful😝), as well as the string literal operator template, is a C++20 improvement. We can now write ‘"Hi"_cts’ to generate a compile_time_string object. Note, however, that this object is of type compile_time_string<3>, so "Hi"_cts and "Ha"_cts are of the same type—which is very different from the result of the GNU extension. However, the important thing is that compile_time_string can now be used as the type of a template parameter, so we can just add another layer:

template <compile_time_string cts>
struct cts_wrapper {
    static constexpr compile_time_string str{cts};
};

Corresponding to the previous compile-time string type comparison, we now need to write:

auto a = cts_wrapper<"Hi"_cts>{};
auto b = cts_wrapper<"Hi"_cts>{};
static_assert(is_same_v<decltype(a), decltype(b)>);

Or we can further simplify it to (as compile_time_string has a non-explicit constructor):

auto a = cts_wrapper<"Hi">{};
auto b = cts_wrapper<"Hi">{};
static_assert(is_same_v<decltype(a), decltype(b)>);

Summary

In this blog I have discussed two things:

  • Compile-time string manipulations
  • Strings as non-type template parameters

They have proved to be useful in my real projects. When I have time, I will explore some of their uses in later posts. Stay tuned!

Contextual Memory Tracing

The Need

A long, long time ago I wrote about memory debugging. I redefined new as a macro, took advantage of placement new, and replaced the global operator new. However, only the replacement of the global operator new turned out to be useful in catching memory leaks, while the other facilities became more or less futile; memory allocation was becoming more and more implicit with the widespread use of STL containers and smart pointers, and direct use of new was discouraged. It is even more so today, with many coding conventions basically banning the use of new and delete. So I would like to revisit this topic.

First, there is still a need to trace memory usage, even though memory leakage, in the form of unmatched new, is unlikely today. People still need to know how memory is used, by which parts of the program, in which functions and modules, and so on. The exact point of memory allocation is becoming less relevant, as memory allocation is becoming less direct. It probably occurs in some libraries, instead of in application code. So now tracing memory usage means recording the usage context, instead of the exact code position.

Be Contextual

Usage contexts can be set up in a stack-like data structure, and I have done so several times in the past. What needs to be recorded in the context is something one needs to decide beforehand. If you only want to trace memory usage, you can do as I do below. But you may want to fit the interface with your specific memory manager, adding whatever needs to be passed to it. Anyway, you decide what should be in it. My example code is as follows:

struct context {
    const char* file;
    const char* func;
};

We want to record the context automatically, and RAII can be used for this purpose:

class checkpoint {
public:
    explicit checkpoint(const context& ctx);
    ~checkpoint();

private:
    const context ctx_;
};

#define CTX_MEMORY_CHECKPOINT()       \
    checkpoint memory_checkpoint{     \
        context{__FILE__, __func__}}

thread_local std::deque<context>
    context_stack{
        context{"<UNKNOWN>", "<UNKNOWN>"}};

void save_context(const context& ctx)
{
    context_stack.push_back(ctx);
}

void restore_context(const context& ctx)
{
    assert(!context_stack.empty() &&
           context_stack.back() == ctx);
    context_stack.pop_back();
}

const context& get_current_context()
{
    assert(!context_stack.empty());
    return context_stack.back();
}

checkpoint::checkpoint(const context& ctx) : ctx_(ctx)
{
    save_context(ctx);
}

checkpoint::~checkpoint()
{
    restore_context(ctx_);
}

Please notice that the context_stack needs to be thread_local, which was something not standardized when I last wrote about tracing memory usage. It is very convenient to save the information on a per-thread basis.
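
A usage sketch (do_work is a hypothetical function of mine):

void do_work()
{
    CTX_MEMORY_CHECKPOINT();
    // Any allocation from here to the end of the scope
    // will be attributed to {__FILE__, "do_work"}
    std::vector<int> v(1000);
    // …
}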

Fitting with Real Memory Managers

Before we define the operator new and operator delete functions (normally called ‘allocation’ and ‘deallocation’ functions, and there are a lot of them), let us first define the generic/sample functions that do the real allocation and deallocation. We just pass on the necessary arguments to the system memory manager here (but you may want to do more to make it work with an existing memory manager), and we still use the C convention that a memory allocation failure is indicated by a null pointer:

void* ctx_alloc(size_t size, size_t alignment,
                const context* /*unused*/)
{
#ifdef _WIN32
    return _aligned_malloc(size, alignment);
#elif defined(__unix) || defined(__unix__)
    void* memptr{};
    int result = posix_memalign(&memptr, alignment, size);
    if (result == 0) {
        return memptr;
    } else {
        return nullptr;
    }
#else
    // No alignment guarantees on other platforms
    (void)alignment;
    return malloc(size);
#endif
}

void ctx_free(void* ptr, const context* /*unused*/)
{
#ifdef _WIN32
    _aligned_free(ptr);
#else
    free(ptr);
#endif
}

operator new & operator delete

Now we can go to the allocation and deallocation functions. The declarations with our new context parameter are the following:

void* operator new  (size_t size,
                     const context& ctx);
void* operator new[](size_t size,
                     const context& ctx);
void* operator new  (size_t size,
                     std::align_val_t align_val,
                     const context& ctx);
void* operator new[](size_t size,
                     std::align_val_t align_val,
                     const context& ctx);

void operator delete  (void* ptr,
                       const context&) noexcept;
void operator delete[](void* ptr,
                       const context&) noexcept;
void operator delete  (void* ptr,
                       std::align_val_t align_val,
                       const context&) noexcept;
void operator delete[](void* ptr,
                       std::align_val_t align_val,
                       const context&) noexcept;

But these are not all the functions we need to rewrite. We need to replace the non-contextual versions too, and they are actually key to the memory tracing functionality. Our saved contexts can be used as follows:

void* operator new(size_t size)
{
    return operator new(size, get_current_context());
}

void* operator new[](size_t size)
{
    return operator new[](size, get_current_context());
}

Assuming the existence of an alloc_mem and a free_mem function, we can make the rest of the allocation and deallocation functions basically forwarders:

void* operator new(size_t size,
                   const std::nothrow_t&) noexcept
{
    return alloc_mem(size, get_current_context(),
                     alloc_is_not_array);
}

void* operator new[](size_t size,
                     const std::nothrow_t&) noexcept
{
    return alloc_mem(size, get_current_context(),
                     alloc_is_array);
}

void* operator new(size_t size,
                   std::align_val_t align_val)
{
    return operator new(size, align_val,
                        get_current_context());
}

void* operator new[](size_t size,
                     std::align_val_t align_val)
{
    return operator new[](size, align_val,
                          get_current_context());
}

void* operator new(size_t size,
                   std::align_val_t align_val,
                   const std::nothrow_t&) noexcept
{
    return alloc_mem(size, get_current_context(),
                     alloc_is_not_array,
                     size_t(align_val));
}

void* operator new[](size_t size,
                     std::align_val_t align_val,
                     const std::nothrow_t&) noexcept
{
    return alloc_mem(size, get_current_context(),
                     alloc_is_array,
                     size_t(align_val));
}

void* operator new(size_t size, const context& ctx)
{
    void* ptr = alloc_mem(size, ctx, alloc_is_not_array);
    if (ptr == nullptr) {
        throw std::bad_alloc();
    }
    return ptr;
}

void* operator new[](size_t size, const context& ctx)
{
    void* ptr = alloc_mem(size, ctx, alloc_is_array);
    if (ptr == nullptr) {
        throw std::bad_alloc();
    }
    return ptr;
}

void* operator new(size_t size,
                   std::align_val_t align_val,
                   const context& ctx)
{
    void* ptr = alloc_mem(size, ctx, alloc_is_not_array,
                          size_t(align_val));
    if (ptr == nullptr) {
        throw std::bad_alloc();
    }
    return ptr;
}

void* operator new[](size_t size,
                     std::align_val_t align_val,
                     const context& ctx)
{
    void* ptr = alloc_mem(size, ctx, alloc_is_array,
                          size_t(align_val));
    if (ptr == nullptr) {
        throw std::bad_alloc();
    }
    return ptr;
}

void operator delete(void* ptr) noexcept
{
    free_mem(ptr, alloc_is_not_array);
}

void operator delete[](void* ptr) noexcept
{
    free_mem(ptr, alloc_is_array);
}

void operator delete(void* ptr, size_t) noexcept
{
    free_mem(ptr, alloc_is_not_array);
}

void operator delete[](void* ptr, size_t) noexcept
{
    free_mem(ptr, alloc_is_array);
}

void operator delete(
    void* ptr, std::align_val_t align_val) noexcept
{
    free_mem(ptr, alloc_is_not_array,
                 size_t(align_val));
}

void operator delete[](
    void* ptr, std::align_val_t align_val) noexcept
{
    free_mem(ptr, alloc_is_array,
                 size_t(align_val));
}

void operator delete(void* ptr,
                     const context&) noexcept
{
    operator delete(ptr);
}

void operator delete[](void* ptr,
                       const context&) noexcept
{
    operator delete[](ptr);
}

void operator delete(void* ptr,
                     std::align_val_t align_val,
                     const context&) noexcept
{
    operator delete(ptr, align_val);
}

void operator delete[](void* ptr,
                       std::align_val_t align_val,
                       const context&) noexcept
{
    operator delete[](ptr, align_val);
}

Contexts and Allocation/Deallocation

Now let us focus on the two functions that do the real job:

enum is_array_t : uint32_t {
    alloc_is_not_array,
    alloc_is_array
};

void* alloc_mem(size_t size,
                const context& ctx,
                is_array_t is_array,
                size_t alignment);

void free_mem(void* ptr,
              is_array_t is_array,
              size_t alignment);

Considering this interface, the context information can only be stored immediately before the memory block returned to the user. In order to trace leaked memory, we need to chain the allocated memory blocks into a linked list, and the control block is as follows:

struct new_ptr_list_t {
    new_ptr_list_t* next;
    new_ptr_list_t* prev;
    size_t          size;
    context         ctx;
    uint32_t        head_size : 31;
    uint32_t        is_array : 1;
    uint32_t        magic;
};

The first four fields should be very clear in meaning. head_size probably requires some explanation. While the struct is fixed in size, alignments can be different across allocations, resulting in different offsets from the struct pointer to the memory pointer the user gets. So this field records the aligned struct size. is_array records whether the allocation is done by an operator new[]; we use this piece of information to detect the new[]/delete or new/delete[] mismatch, as well as allowing for special offsets required by array allocations. magic is used to mark that the memory is allocated by this implementation so that when freeing the memory we can detect corrupt memory, double freeing, and suchlike.
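
A sketch of the layout of a single allocation may help (my own illustration, not from the real code):

// |<----- head_size ----->|<----- size ----->|
// +-----------------------+------------------+
// | new_ptr_list_t, plus  |    user data     |
// | padding for alignment |                  |
// +-----------------------+------------------+
// ^ block from ctx_alloc  ^ pointer returned
//                           to the user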

We also need the list head of control blocks, a mutex to protect its access, a function to align data size, and the magic number constant:

constexpr uint32_t CTX_MAGIC = 0x4D585443; // "CTXM"

new_ptr_list_t new_ptr_list = {
    &new_ptr_list, &new_ptr_list, 0, {},
    0, alloc_is_not_array, CTX_MAGIC};

std::mutex new_ptr_lock;

constexpr uint32_t align(size_t s, size_t alignment)
{
    return static_cast<uint32_t>((s + alignment - 1) &
                                 ~(alignment - 1));
}

alloc_mem is then quite straightforward:

void* alloc_mem(size_t size, const context& ctx,
                is_array_t is_array,
                size_t alignment =
                    __STDCPP_DEFAULT_NEW_ALIGNMENT__)
{
    assert(alignment >=
           __STDCPP_DEFAULT_NEW_ALIGNMENT__);

    uint32_t aligned_list_item_size =
        align(sizeof(new_ptr_list_t), alignment);
    size_t s = size + aligned_list_item_size;
    auto ptr = static_cast<new_ptr_list_t*>(
        ctx_alloc(s, alignment, ctx));
    if (ptr == nullptr) {
        return nullptr;
    }
    auto usr_ptr = reinterpret_cast<char*>(ptr) +
                   aligned_list_item_size;
    ptr->ctx = ctx;
    ptr->is_array = is_array;
    ptr->size = size;
    ptr->head_size = aligned_list_item_size;
    ptr->magic = CTX_MAGIC;
    {
        std::lock_guard guard{new_ptr_lock};
        ptr->prev = new_ptr_list.prev;
        ptr->next = &new_ptr_list;
        new_ptr_list.prev->next = ptr;
        new_ptr_list.prev = ptr;
    }
    return usr_ptr;
}

I.e. it does the following things:

  1. Allocates memory enough to satisfy the user requirement and the additional metadata (new_ptr_list_t)
  2. Fills in the metadata
  3. Chains the allocated memory blocks into a list
  4. Returns the pointer after the metadata

free_mem does the opposite thing. Apparently, we need a function to convert the user pointer back to the originally allocated pointer, which is not really trivial, considering the potential cases of bad pointers and unmatched use of the array and non-array versions of new and delete. It is the convert_user_ptr function:

new_ptr_list_t* convert_user_ptr(void* usr_ptr,
                                 size_t alignment)
{
    auto offset = static_cast<char*>(usr_ptr) -
                  static_cast<char*>(nullptr);
    auto adjusted_ptr = static_cast<char*>(usr_ptr);
    bool is_adjusted = false;

    // Check alignment first
    if (offset % alignment != 0) {
        offset -= sizeof(size_t);
        if (offset % alignment != 0) {
            return nullptr;
        }
        // Likely caused by new[] followed by delete, if
        // we arrive here
        adjusted_ptr = static_cast<char*>(usr_ptr) -
                       sizeof(size_t);
        is_adjusted = true;
    }
    auto ptr = reinterpret_cast<new_ptr_list_t*>(
        adjusted_ptr -
        align(sizeof(new_ptr_list_t), alignment));
    if (ptr->magic == CTX_MAGIC &&
        (!is_adjusted || ptr->is_array)) {
        return ptr;
    }

    if (!is_adjusted && alignment > sizeof(size_t)) {
        // Again, likely caused by new[] followed by
        // delete, as aligned new[] allocates alignment
        // extra space for the array size.
        ptr = reinterpret_cast<new_ptr_list_t*>(
            reinterpret_cast<char*>(ptr) - alignment);
        is_adjusted = true;
    }
    if (ptr->magic == CTX_MAGIC &&
        (!is_adjusted || ptr->is_array)) {
        return ptr;
    }

    return nullptr;
}

With this, free_mem then becomes easy:

void free_mem(void* usr_ptr, is_array_t is_array,
              size_t alignment =
                  __STDCPP_DEFAULT_NEW_ALIGNMENT__)
{
    assert(alignment >=
           __STDCPP_DEFAULT_NEW_ALIGNMENT__);
    if (usr_ptr == nullptr) {
        return;
    }

    auto ptr = convert_user_ptr(usr_ptr, alignment);
    if (ptr == nullptr) {
        fprintf(stderr,
                "delete%s: invalid pointer %p\n",
                is_array ? "[]" : "", usr_ptr);
        abort();
    }
    if (is_array != ptr->is_array) {
        const char* msg = is_array
                              ? "delete[] after new"
                              : "delete after new[]";
        fprintf(stderr,
                "%s: pointer %p (size %zu)\n",
                msg, usr_ptr, ptr->size);
        abort();
    }
    {
        std::lock_guard guard{new_ptr_lock};
        ptr->magic = 0;
        ptr->prev->next = ptr->next;
        ptr->next->prev = ptr->prev;
    }
    ctx_free(ptr, &(ptr->ctx));
}

I.e.:

  1. It invokes convert_user_ptr to convert the user-provided pointer to a new_ptr_list_t*.
  2. It checks whether array-ness matches in the memory allocation and deallocation.
  3. It unlinks the memory block from the linked list.
  4. If anything bad happens, it prints a message and aborts the whole program (as the program already has undefined behaviour).

One More Thing

It is now nearly complete: we have set up the mechanisms to record memory contexts in the memory allocation and deallocation functions. However, I have omitted one important detail so far. If you used my code verbatim as above, the program would crash on the first memory allocation. When the global allocation and deallocation functions are replaced, care must be taken whenever we need additional memory inside those functions: if we somehow use the generic C++ memory allocation mechanisms, they will invoke operator new in the end, causing infinite recursion. It is still OK to use malloc/free, so we need a malloc_allocator for the context stack:

template <typename T>
struct malloc_allocator {
    typedef T value_type;

    typedef std::true_type is_always_equal;
    typedef std::true_type
        propagate_on_container_move_assignment;

    malloc_allocator() = default;
    template <typename U>
    malloc_allocator(const malloc_allocator<U>&) {}

    template <typename U>
    struct rebind {
        typedef malloc_allocator<U> other;
    };

    T* allocate(size_t n)
    {
        return static_cast<T*>(malloc(n * sizeof(T)));
    }
    void deallocate(T* p, size_t)
    {
        free(p);
    }
};

thread_local std::deque<context,
                        malloc_allocator<context>>
    context_stack{context{"<UNKNOWN>", "<UNKNOWN>"}};

Everything Put Together

You can find the real code with more details and a working memory leak checker in this repository:

https://github.com/adah1972/nvwa/tree/master/nvwa

You need to add the root directory of Nvwa to your include path, and nvwa/memory_trace.cpp and nvwa/aligned_memory.cpp to your project. In order to add a new memory checkpoint, use the macro NVWA_MEMORY_CHECKPOINT (Nvwa macros are usually prefixed with ‘NVWA_’). A very short test program follows:

#include <nvwa/memory_trace.h>

int main()
{
    char* ptr1 = new char[20];
    NVWA_MEMORY_CHECKPOINT();
    char* ptr2 = new char[42];
}

The output would be like the following:

Leaked object at 0x57697e30 (size 20, context: <UNKNOWN>/<UNKNOWN>)
Leaked object at 0x57697e70 (size 42, context: test.cpp/int main())
*** 2 leaks found

Notes about Using IWYU on macOS

I have recently found IWYU, a very useful tool to check whether you have included header files correctly. It can be cleanly installed on Ubuntu by apt, though some configuration is needed to make it identify problems more accurately, e.g. letting it know that a header file is private and the public header file including it should be used instead, or that a symbol should be regarded as defined in a certain header file, regardless of where it is really defined in the implementation.

I did encounter some problems on macOS. I installed IWYU, but it had problems with the Xcode header files, causing errors like ‘fatal error: ‘stdarg.h’ file not found’ when used. A quick search showed that it was a known problem, one that seemed to deteriorate with more recent versions of LLVM, which IWYU uses internally.

I happened to have LLVM 7.0 installed from Homebrew, so I had a try. Here are the simple steps:

  1. Make sure you have llvm@7. If not, ‘brew install llvm@7’ would do.
  2. Check out IWYU to a directory.
  3. Execute ‘git checkout clang_7.0’ inside the IWYU directory to choose the Clang 7 branch.
  4. Execute ‘mkdir build && cd build’ to use a build directory.
  5. Execute ‘CC=/usr/local/opt/llvm@7/bin/clang CXX=/usr/local/opt/llvm@7/bin/clang++ cmake -DCMAKE_PREFIX_PATH=/usr/local/opt/llvm@7 ..’ to configure IWYU to use the Homebrew LLVM 7.0.
  6. Execute ‘make’ to build IWYU.
  7. Execute ‘mkdir -p lib/clang/7.1.0 && ln -s /usr/local/opt/llvm@7/lib/clang/7.1.0/include lib/clang/7.1.0/’ to symlink the Clang 7 include directory inside IWYU. This step is critical to solve the “file not found” problem, but regretfully it does not work with a more recent LLVM version like LLVM 11.
  8. Symlink executables to your bin directory for quick access. Something like:
cd ~/bin
ln -s ~/code/include-what-you-use/build/bin/include-what-you-use .
ln -s ~/code/include-what-you-use/iwyu_tool.py iwyu_tool
ln -s include-what-you-use iwyu

Now IWYU is ready to use.

Please be aware that IWYU does not often work out of the box, and some configuration is needed. The key document to read is IWYU Mappings, and the bundled mapping files (.imp) can be good examples. You probably want to use libcxx.imp as a start. Some mappings are already included by default, and you can find them in the file iwyu_include_picker.cc.

While it is not perfect, it did help me identify many inclusion issues. This commit is a result of using IWYU.

Happy hacking!

The MB Confusion

Every software developer thinks they know what MB means. It is, of course, 1,048,576 bytes. Only the hard drive vendors disagree.

How about a normal computer user? You’ll probably agree that they do not know for sure. It does not matter, anyway, as long as they know that 100 MB is greater than 90 MB. Right?

Let me ask you now, how many bytes are there in a 1.44 MB floppy disk?

You’ll probably be frustrated by the fact that 1.44 × 1024 × 1024 is not an integral value. The fact is, 1.44 MB is a misnomer: it is actually 1440 KB.

Again, the confusion comes from the storage vendors. Or does it?

In fact, only the semiconductor industry favours powers of two. The only thing related to powers of two in the storage industry is that a sector is 512 bytes by convention—so ‘1.44 MB’ is actually 2880 sectors, as such a floppy disk has 2 sides, 80 tracks per side, and 18 sectors per track. All the other numbers have no relationship with powers of two. So it is natural that the storage industry now reports drive capacity in decimal MBs, GBs, and TBs.
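
A quick check of the arithmetic (my own illustration, in C++ for fun):

static_assert(2 * 80 * 18 == 2880);     // sides × tracks × sectors
static_assert(2880 * 512 == 1474560);   // bytes on a ‘1.44 MB’ floppy
static_assert(1474560 == 1440 * 1024);  // i.e. exactly 1440 KB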

In order to mitigate the confusion, IEC 60027-2 introduced a series of binary unit prefixes in 1999:

  • Ki- or kibi-, 2¹⁰
  • Mi- or mebi-, 2²⁰
  • Gi- or gibi-, 2³⁰
  • Ti- or tebi-, 2⁴⁰

So instead of saying a memory page is 4 KB (kilobytes), we should really say 4 KiB (kibibytes). The only problem is that more than 20 years after its introduction and more than 10 years after the ISO standardization (ISO/IEC 80000-2 in 2008), these prefixes are still not popular. The situation is so bad that Wikipedia explicitly discourages their use in its Manual of Style. The reason is very practical: most Wikipedia readers are not familiar with the IEC binary prefixes. So instead of using terms like mebibytes, Wikipedia recommends using the more common prefixes, and asks the authors to ‘explicitly specify the meaning of k and K as well as the primary meaning of M, G, T, etc. in an article’.

Anyway, when we talk about RAM, there is no real confusion, as we always use binary powers, and 4 kB, 4 KB, and 4 KiB all mean 4096 bytes in most cases. When we talk about frequency or bandwidth, we always use decimal powers, and people will not misunderstand what 4 GHz or 100 Mbps means. The only place where there is a lot of confusion is storage. We have seen that 1.44 MB is neither binary nor decimal. We should also be aware that different OSs/tools use different conventions. While Microsoft Windows always sticks to the binary notation with units like KB, MB, and GB, Linux and the GNU core utilities have begun to use the IEC binary prefixes, and macOS has been using the SI decimal prefixes (1 GB = 1,000,000,000 bytes) since 2009 (Snow Leopard).

While I am not sure I will switch to representing 1,048,576 bytes as 1 MiB when talking about RAM usage, I am pretty sure I will not report a 123,456,789-byte file as 117.7 MB ever again—123.5 MB seems much simpler, more natural, and more correct.

What will be your choice?

P.S. For a more in-depth coverage on this topic, check out the Wikipedia article binary prefix.

P.P.S. By the way, did you notice that I had a number inconsistency in the very first sentence? If not, it is evidence that we can get rid of ‘he or she’. For more details, check out On the Use of She as a Generic Pronoun.

Enum Filter

I have recently encountered code that is structurally similar to the following:

enum class number {
    zero,
    one,
    two,
    three,
    four,
    five,
    six,
    seven,
    end
};

…

if (value == number::two ||
    value == number::three ||
    value == number::five ||
    value == number::seven) {
    …
}

The manual comparisons do not look good to me: they are repetitive, error-prone, and fail to express the intent. So the natural question comes: How can we make the code ‘better’?

While this is a fake example, I hope you can see the point that enumerators have specific properties (which I will call ‘traits’ in this article, as per C++ traditions), and I want the code to show the intent as expressed by traits.

However, let us get rid of ‘value ==’ first. Any repetition is bad, right?

My first take is something as follows:

template <typename T>
bool is_in(const T& value,
           std::initializer_list<T> value_list)
{
    for (const auto& item : value_list) {
        if (value == item) {
            return true;
        }
    }
    return false;
}

Very simple and straightforward, but not good enough. How can we generate the list, given some criteria?

If you are familiar with the concept of template metaprogramming, you know that this is a compile-time programming topic: compile-time filtering.

In order to filter on the enumerators, we need to describe them with traits. The following code could be good enough for our current purpose:

template <number n>
struct number_traits;

template <>
struct number_traits<number::zero> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::one> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::two> {
    static constexpr bool is_prime = true;
};

template <>
struct number_traits<number::three> {
    static constexpr bool is_prime = true;
};

template <>
struct number_traits<number::four> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::five> {
    static constexpr bool is_prime = true;
};

template <>
struct number_traits<number::six> {
    static constexpr bool is_prime = false;
};

template <>
struct number_traits<number::seven> {
    static constexpr bool is_prime = true;
};

So, let us try figuring out a way to generate such a list.

After some study, you will know that initializer_list is not fit for such manipulations. tuple is a better utility. The main reason is that we had better manipulate types, instead of values, in template metaprogramming. An initializer_list is not capable of doing that, whereas C++ already has a facility to convert compile-time integral constants into types, its name being exactly integral_constant.

Its approximate definition is as follows, in case you are not familiar with it:

template<class T, T v>
struct integral_constant {
    static constexpr T value = v;
    using value_type = T;
    using type = integral_constant;
    constexpr operator value_type() const noexcept
    {
        return value;
    }
    constexpr value_type operator()() const noexcept
    {
        return value;
    }
};

Such a definition is already provided by the standard library. So, instead of having an initializer_list like {number::two, number::three, number::five}, we would have something like the following:

std::make_tuple(
    std::integral_constant<number, number::two>{},
    std::integral_constant<number, number::three>{},
    std::integral_constant<number, number::five>{})

It would be safe to pass such ‘arguments’ for compile-time programming, as only their types matter. We would not need their values, as each type has exactly one unique value.

The next questions are:

  1. How can we generate the constants for all possible enumerators?—I.e. compile-time iteration.
  2. How can we filter to get only the values we want?—I.e. compile-time filtering.
  3. How can we check whether a value is equal to one of the constants we have?—I.e. (compile-time or run-time) checking like the is_in above.

The answer to the first question is that we need to generate a sequence, and we need to know what the last enumerator is. As far as I know, there is currently no way in C++ to enumerate all the enumerators of an enum type. I have to resort to an agreement to mark the end of a continuous enumeration, and my choice is that we use end to mark the end, as in the enum class listed in the very beginning of this article. That is, I need to generate the sequence from integral_constant<number, number{0}> to integral_constant<number, number::end>, exclusive.

This job can be easily done with the following code, using the standard tuple and index_sequence technique:

template <typename E, size_t... ints>
constexpr auto make_all_enum_consts_impl(
    std::index_sequence<ints...>)
{
    return std::make_tuple(std::integral_constant<
        E, E(ints)>{}...);
}

template <typename E>
constexpr auto make_all_enum_consts()
{
    return make_all_enum_consts_impl<E>(
        std::make_index_sequence<size_t(E::end)>{});
}
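
To verify that it works as intended (my own check):

constexpr auto all_number_consts =
    make_all_enum_consts<number>();
static_assert(std::tuple_size_v<
                  decltype(all_number_consts)> == 8);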

Now we have come to the second, and the really difficult, part: how can we filter the values to get only those we need?

The answer is to use apply, tuple_cat, and conditional_t, three important tools in the C++ template metaprogramming world:

  • With apply, we can call a function with all elements of a tuple as arguments. I.e. apply(f, make_tuple(42, "answer")) would be equivalent to f(42, "answer").
  • With tuple_cat, we can concatenate elements of tuples into a new tuple. I.e. tuple_cat(make_tuple(42, "answer"), make_tuple("of", "everything")) would result in the tuple {42, "answer", "of", "everything"}.
  • With conditional_t, we can get one of the given types based on a compile-time Boolean expression. I.e. conditional_t<true, int, string> would result in int, but conditional_t<false, int, string> would result in string.

Each tool may look trivial individually, but they can be combined to work wonders. Specifically, they can do what we now need.

This is the final form I use (mainly inspired by this Stack Overflow answer):

#define ENUM_FILTER_FROM(E, T, tup)                     \
    std::apply(                                         \
        [](auto... ts) {                                \
            return std::tuple_cat(                      \
                std::conditional_t<                     \
                    E##_traits<decltype(ts)::value>::T, \
                    std::tuple<decltype(ts)>,           \
                    std::tuple<>>{}...);                \
        },                                              \
        tup)

Let me explain what it does:

  • The macro takes an enumeration type, a trait name, and a tuple of enumerator constants, which is created by make_all_enum_consts above. The reason why a tuple of constants is used is that the result of calling ENUM_FILTER_FROM can be filtered again.
  • std::apply invokes the generic lambda with the tuple of arguments
  • The generic lambda does the compile-time computation of concatenating (tuple_cat) the arguments into a new tuple
  • Each argument of tuple_cat is either a tuple of one enumerator constant (if its type satisfies the trait) or an empty tuple (otherwise)
  • So the end result of executing the code in the macro is a tuple of enumerator constants that satisfy the trait, as the example below shows
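
For instance, with the traits defined earlier, the following check (my own example) passes, as exactly four of the eight enumerators are marked prime:

constexpr auto prime_consts =
    ENUM_FILTER_FROM(number, is_prime,
                     make_all_enum_consts<number>());
static_assert(std::tuple_size_v<
                  decltype(prime_consts)> == 4);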

The answer to the third question is relatively simple. For maximal flexibility, I am splitting it into two steps:

  • Convert the tuple of types into a tuple of values
  • Check whether a value is in the tuple with a fold expression

Here is the code:

template <typename Tuple, size_t... ints>
constexpr auto make_values_from_consts_impl(
    Tuple tup, std::index_sequence<ints...>)
{
    return std::make_tuple(std::get<ints>(tup)()...);
}

template <typename Tuple>
constexpr auto make_values_from_consts(Tuple tup)
{
    return make_values_from_consts_impl(
        tup, std::make_index_sequence<
                 std::tuple_size_v<Tuple>>{});
}

template <typename T, typename Tuple, size_t... ints>
constexpr bool is_in_impl(const T& value,
                          const Tuple& tup,
                          std::index_sequence<ints...>)
{
    return ((value == std::get<ints>(tup)) || ...);
}

template <typename T, typename Tuple>
constexpr std::enable_if_t<
    std::is_same_v<T, std::decay_t<decltype(std::get<0>(
                          std::declval<Tuple>()))>>,
    bool>
is_in(const T& value, const Tuple& tup)
{
    return is_in_impl(value, tup,
                      std::make_index_sequence<
                          std::tuple_size_v<Tuple>>{});
}

Finally, we can define the function is_prime:

constexpr bool is_prime(number n)
{
    return is_in(
        n, make_values_from_consts(ENUM_FILTER_FROM(
               number, is_prime,
               make_all_enum_consts<number>())));
}
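
And a quick verification (my own check):

static_assert(is_prime(number::two));
static_assert(!is_prime(number::four));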

More interestingly, the result of invoking ENUM_FILTER_FROM can be passed to ENUM_FILTER_FROM again. If we defined the trait is_even as well as is_prime, we would be able to write:

ENUM_FILTER_FROM(number, is_even,
    ENUM_FILTER_FROM(number, is_prime, …))

Is that nice?

Do note that there is an asymmetry here. It is trivial to implement make_values_from_consts, but it seems impossible to implement its inverse constexpr function make_consts_from_values. This is because there are no constexpr function arguments in C++ (check out this discussion if you are interested in the reasons): no arguments are regarded as constexpr, even in a constexpr function. You can work around the problem in a cumbersome way, but for this post I am sticking to using types as long as possible.

That’s it, my experience of using compile-time filtering. I have found the techniques presented here useful, and I hope you will find them useful too.

Time Zones in Python

Python datetimes are naïve by default, in that they do not include time zone (or time offset) information. E.g. one might be surprised to find that (datetime.now() - datetime.utcnow()).total_seconds() is basically the local time offset (28800 in my case for UTC+08:00). I personally kind of expected a value near zero. This said, datetime is able to handle time zones, but the definitions of time zones are not included in the Python standard library. A third-party library is necessary for handling time zones. In our project, a developer introduced pytz in the beginning. It all looked well, until I found the following:

>>> from datetime import datetime
>>> from pytz import timezone
>>> timezone('Asia/Shanghai')
<DstTzInfo 'Asia/Shanghai' LMT+8:06:00 STD>
>>> (datetime(2017, 6, 1, tzinfo=timezone('Asia/Shanghai'))
...  - datetime(2017, 6, 1, tzinfo=timezone('UTC'))
... ).total_seconds()
-29160.0

Sh*t! Was pytz a joke? The time zone of Shanghai (or China) should be UTC+08:00, and I did not care a bit about its local mean time (I was, of course, expecting -28800 on the last line). What was the author thinking about? Besides, it did not provide a local time zone function, and we had to hardcode our time zone to 'Asia/Shanghai', which was ugly.—Disappointed, I searched for an alternative, and I found dateutil.tz. From then on, I routinely use code like the following:

from datetime import datetime
from dateutil.tz import tzlocal, tzutc
…
datetime.now(tzlocal())  # for local time
datetime.now(tzutc())    # for UTC time

When answering a StackOverflow question, I realized I had misunderstood pytz. I still thought it had some bad design decisions; however, it would have been able to achieve everything I needed, if I had read its manual carefully (I cannot help remembering the famous acronym ‘RTFM’). It was explicitly mentioned in the manual that passing a pytz time zone to the datetime constructor (as I did above) ‘“does not work” with pytz for many timezones’. One has to use the pytz localize method or the standard astimezone method of datetime.

As tzlocal and tzutc from dateutil.tz fulfilled all my needs and were easy to use, I continued to use them. The fact that I got a few downvotes on StackOverflow certainly did not make me like pytz better.


When introducing apscheduler to our project, we noticed that it required that the time zone be provided by pytz—it ruled out the use of dateutil.tz. I wondered what was special about it. I also became aware of a Python package called tzlocal, which was able to provide a pytz time zone conforming to the local system settings. More searching and reading revealed facts that I had missed so far:

  • The Python datetime object does not store or handle daylight-saving status. Adding a timedelta to it does not alter its time zone information, and can result in an invalid local time (say, adding one day to the last day of daylight-saving time does not result in a datetime in standard time).
  • The time zone provided by dateutil.tz does not handle all corner cases. E.g. it does not know that Russia observed all-year daylight-saving time from 2012 to 2014, and it does not know that China observed daylight-saving time from 1986 to 1991.
  • The pytz localize and normalize methods can handle all these complexities, and this is partly the reason why pytz requires people to use its localize method instead of passing the time zone to datetime.

So pytz can actually do more, and correctly. I can do things like finding out in which years China observed daylight-saving time:

from datetime import datetime, timedelta
from pytz import timezone
china = timezone('Asia/Shanghai')
utc = timezone('UTC')
expect_diff = timedelta(hours=8)
for year in range(1980, 2000):
    dt = datetime(year, 6, 1)
    if utc.localize(dt) - china.localize(dt) != expect_diff:
        print(year)

It is now clear to me that the pytz-style time zone is necessary when apscheduler handles a past or future local time.


A few benchmarks regarding the related functions in ipython (not that they are very important):

from datetime import datetime
import dateutil.tz
import pytz
import tzlocal
dateutil_utc = dateutil.tz.tzutc()
dateutil_local = dateutil.tz.tzlocal()
pytz_utc = pytz.utc
pytz_local = tzlocal.get_localzone()
%timeit datetime.utcnow()
310 ns ± 0.405 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit datetime.now()
745 ns ± 1.65 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit datetime.now(dateutil_utc)
924 ns ± 0.907 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit datetime.now(pytz_utc)
2.28 µs ± 18.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit datetime.now(dateutil_local)
17.4 µs ± 29.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit datetime.now(pytz_local)
5.54 µs ± 11.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

My final recommendations:

  • One should consider using naïve UTC everywhere, as it is easy and fast to work with.
  • The next best is using offset-aware UTC. Both dateutil.tz and pytz can be used in this case without any problems.
  • In all other cases, pytz (as well as tzlocal) is preferred, but one should beware of the peculiar behaviour of pytz time zones.

阅读的权利

作者:理查德 · 斯托曼

本文发表在 1997 年二月号的《计算机协会通信》(第 40 卷,第 2 期)。


(摘自《第谷之路》,关于月亮革命先驱者的文集,2096 年于月亮城出版。)

对于丹 · 哈尔伯特来说,第谷之路始于大学——就在丽莎 · 兰兹向他借计算机的时候。她的计算机坏了。如果她不能另外借到一台的话,期中作业就肯定会不及格。除了丹,她可不敢向任何人开口。

This put Dan in a dilemma. He had to help her, but if he lent her his computer, she might read his books. The mere thought shocked him, to say nothing of the fact that it was actually a crime: if you let someone else read your books, you could go to prison for many years. Like everyone else, he had been taught since elementary school that sharing books was nasty and wrong, something only pirates would do.

What was more, the SPA (short for Software Protection Authority) would very likely catch him. In his software class, Dan had learned that each book had a copyright monitor that reported when, where, and by whom it was read to Central Licensing. (They used this information to catch reading pirates, but also to sell personal interest profiles to retailers.) The next time his computer was networked, Central Licensing would find out what he had done. As the owner of the computer, he would receive the harshest punishment, for not taking pains to prevent the crime.

Of course, Lissa did not necessarily intend to read his books. She might want the computer only to finish her midterm project. But Dan knew she came from a middle-class family that could hardly afford the tuition, let alone her reading fees. Reading his books might be the only way she could graduate. He understood the situation; he himself had had to take out a loan to pay for the research papers he read. (Ten percent of those fees went to the papers’ authors; since Dan aimed at an academic career, he could hope that his own research papers, if frequently referenced, would bring in enough income to repay the loan.)

Later on, Dan would learn that there had been a time when anyone could go to a library and read journal articles, and even whole books, for free. There had been independent scholars who read thousands of pages without needing government library grants. But beginning in the 1990s, both commercial and nonprofit journal publishers had started charging for access. By 2047, few still remembered that there had once been libraries where the general public could access scholarly literature.

There were, of course, ways to get around the SPA and Central Licensing; they were themselves illegal. Dan had had a classmate in his software class, Frank Martucci, who had obtained an illicit debugging tool and used it to skip over the copyright monitor code when reading books. But he had bragged about it to too many friends, and one of them turned him in to the SPA for a reward (students deep in debt were easily tempted into betrayal). In 2047, Frank was in prison, not for pirate reading, but for possessing a debugger.

Later, Dan would also learn that there had been a time when anyone could own debugging tools. Some were even free, distributed on CDs or available for download on the net. But as ordinary users began using them to bypass copyright monitors, a judge eventually ruled that circumventing copyright monitors had become their principal use in actual practice. That meant debuggers were illegal, and their developers were sent to prison.

Programmers still needed debugging tools, of course. In 2047, debugger vendors distributed numbered copies only, and sold them only to officially licensed, contracted programmers. The debugger Dan used in his software class was kept behind a special firewall so that it could be used only for class exercises.

Another possible way to bypass the copyright monitors was to install a modified system kernel. Dan would eventually find out about the free kernels, and even entire free operating systems, that had existed around the turn of the century. But not only were they illegal, like debuggers, you could not install one even if you had it, unless you knew your computer’s root password. And neither the FBI nor Microsoft Support would tell you that.

Dan concluded that he could not simply lend Lissa his computer. But he could not refuse to help her either, because he loved her. Every chance to speak with her filled him with delight. And that she had chosen him to ask for help meant that she loved him too.

Dan resolved the dilemma by doing something unthinkable: he lent her the computer, and told her his password. This way, when Lissa read his books, Central Licensing would think he was reading them. It was still a crime, but the SPA would not automatically find out about it; they would find out only if Lissa reported him.

Of course, if the school ever found out that he had given Lissa his password, it would be curtains for both of them, no matter what she had used it for. School policy was that any interference with their means of monitoring students’ computer use was grounds for disciplinary action. It did not matter whether you actually did anything harmful; making it hard for the administrators to check on you was already an offence. They took it to mean you were doing something forbidden, and they did not need to know what it was.

Students were not usually expelled for this, at least not directly. What actually happened was that they were banned from the school computer systems, and then inevitably failed all their classes.

Later, Dan would also learn that this kind of university policy had started only in the 1980s, when university students began using computers in large numbers. Before that, universities had taken a different approach to student discipline: they punished activities that were actually harmful, not those that merely raised suspicion.

Lissa did not report Dan to the SPA. His decision to help her led to their marriage, and also led them to question what they had been taught about piracy as children. The couple began reading about the history of copyright, about the Soviet Union and its restrictions on copying, and even about the original United States Constitution. They moved to Luna City, where they found others who had likewise escaped the clutches of the SPA. When the Tycho Uprising began in 2062, the universal right to read soon became one of its central aims.

Author’s Notes

This note was updated in 2007.

The right to read is a battle still being fought today. Although it may take 50 years for our present way of life to fade into obscurity, most of the specific laws and practices described above have already been proposed, and many have been enacted into law in the US and elsewhere. In the US, the 1998 Digital Millennium Copyright Act (DMCA) established the legal basis to restrict the reading and lending of computerized books (and other works as well). The European Union imposed similar restrictions in a 2001 copyright directive. In France, under the DADVSI law adopted in 2006, mere possession of the DeCSS program (free software to decrypt the video on a DVD) is a crime.

In 2001, Senator Hollings, with Disney funding, proposed a bill called the SSSCA that would require every new computer to have mandatory copy-restriction facilities that the user cannot bypass. Following on the heels of the Clipper chip and similar US government key-escrow proposals, this showed a long-term trend: computer systems are increasingly being set up to give third parties, rather than the actual users, control over the machines. The SSSCA was later renamed the CBDTPA (hard to pronounce), which people deliberately read as the ‘Consume But Don’t Try Programming Act’.

The Republicans took control of the US Senate shortly thereafter. They are less tied to Hollywood than the Democrats, so they did not press these proposals. Now that the Democrats are back in control, the danger has once again increased.

In 2001 the US began attempting to use the proposed Free Trade Area of the Americas (FTAA) treaty to impose the same rules on all the countries in the Western Hemisphere. The FTAA is one of the so-called ‘free trade’ treaties, which are actually designed to give business increased power over democratic governments; imposing laws like the DMCA is typical of this spirit. The FTAA was effectively killed by Lula, President of Brazil, who rejected the DMCA requirement and others.

Since then, the US has imposed similar requirements on countries such as Australia and Mexico through bilateral ‘free trade’ agreements, and on countries such as Costa Rica through the Central American Free Trade Agreement. Ecuador’s President Correa refused to sign a ‘free trade’ agreement with the US, but Ecuador had adopted something like the DMCA in 2003. Ecuador’s new constitution may provide an opportunity to get rid of it.

One of the ideas in the story did not actually happen until 2002: the idea that the FBI and Microsoft would keep the root passwords for your personal computer, and not let you have them.

The proponents of this scheme have given it names such as ‘trusted computing’ and ‘Palladium’. We call it ‘treacherous computing’ because the effect is to make your computer obey other companies instead of you. It was implemented in 2007 as part of Windows Vista; we expect Apple to do something similar. In this scheme, it is the manufacturer that keeps the secret code, but the FBI would have little trouble getting it.

What Microsoft keeps is not exactly a password in the traditional sense; no one ever types it on a terminal. Rather, it is a signature and encryption key that corresponds to a second key stored in your computer. This gives Microsoft, and potentially any websites that cooperate with Microsoft, ultimate control over the user’s own computer.

Vista also gives Microsoft additional powers. For instance, Microsoft can forcibly install upgrades, and it can order all machines running Vista to refuse to run a certain device driver. The main purpose of Vista’s many restrictions is to impose DRM that users cannot overcome.

The SPA, which actually stands for the Software Publishers Association, has been replaced in its police-like role by the Business Software Alliance (BSA). The BSA is not, today, an official police force; unofficially, it acts like one. Using methods reminiscent of the erstwhile Soviet Union, it invites people to inform on their co-workers and friends. A BSA terror campaign in Argentina in 2001 made veiled threats that people sharing software would be raped.

When this story was first written, the SPA was threatening small Internet service providers (ISPs), demanding that they permit the SPA to monitor all their users. Most ISPs surrendered when threatened, because they could not afford to fight back in court. At least one ISP, Community ConneXion in Oakland, California, refused the demand and was actually sued. The SPA later dropped the suit, but obtained the DMCA, which gave it the power it sought.

The university security policies described above are not imaginary. For example, a computer at one university in the Chicago area would print the following message when you logged in:

This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such monitoring reveals possible evidence of illegal activity or violation of University regulations, system personnel may provide the evidence of such monitoring to University authorities and/or law enforcement officials.

This is an interesting approach to the Fourth Amendment: pressure nearly everyone to agree, in advance, to waive their rights under it.

Translator: Yongwei Wu

Original: https://www.gnu.org/philosophy/right-to-read.en.html

Note: This is an article I translated quite a few years ago. Its intended usage has ceased to exist, and I am sharing it online. Recent changes at the English site are not reflected in this translation.

This work is free to share under a Creative Commons Attribution-ShareAlike 4.0 Licence.

My Opinions Regarding the Top Five TIOBE Languages

I have written C++ for nearly 30 years. I had been advocating that it was the best language 🤣, until my love moved to Python a few years ago. I will still say C++ is a very powerful and unique language. It is probably the only language that cuts across many different software layers. It lets programmers control bit-level details, and it has the necessary mechanisms to allow programmers to make appropriate abstractions: it is arguably one of the best in this regard, as it provides powerful generics, which are becoming better and better with the upcoming concepts and ranges in C++20. It has very decent optimizing compilers, and suitably written C++ code performs better than nearly all other languages. Therefore, C++ has been widely used not only in low-level stuff like drivers, but also in libraries and applications, especially where performance is wanted, like scientific computing and games. It is still widely used in desktop applications, say, Microsoft Office and Adobe Photoshop. The power does come with a price: it is probably the most complicated computer language today. Mastering the language takes a long time (and with 30 years’ experience I dare not say I have mastered it). Generic code also tends to take a long time to compile. Error messages can be overwhelming, especially to novices. I can go on and on, but it is better to stop here, with a note that the complexity and cost are sometimes worthwhile, in exchange for reduced latency and reduced power usage (from both the CPU and memory).

Python is, on the other hand, easy to learn. It is not a toy language, though: it is handy not only to novices, but also to software veterans like me. The change-and-run cycle is much shorter than that of C++. Code in Python is very readable, partly because lists, sets, and dictionaries have literal support in the language (you cannot write in C++ an expression like {"one": 1} and let the compiler deduce that it is a dictionary). It has features that C++ has lacked for many years: generators/coroutines, lazy ranges, and so on. Generics do not need special support, as the language is dynamically typed (but it also does not surprise programmers by allowing error-prone expressions like "1" + 2, as some scripting languages do). With a good IDE, the argument about its lack of compile-time checks can be crushed: programmers can enjoy edit-time checks. It has a big ecosystem with a huge number of third-party libraries, which are easier to fetch and use than in C++ (thanks to pip). The only major remaining shortcoming to me is performance, but: 1) one may write C/C++ extensions where necessary; and 2) the lack of performance may not matter at all, if your application is not CPU-bound. See my personal experience of a 25x performance boost in two hours.
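
A tiny snippet illustrating both points, on literal syntax and on type strictness (my own example):

scores = {"one": 1}   # a dictionary literal; no type declarations needed
print(scores["one"])  # prints 1

try:
    "1" + 2           # Python refuses to guess a conversion
except TypeError as e:
    print(e)          # can only concatenate str (not "int") to str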

I used Java a long time ago. I do not like it (mostly for its verbosity), and its desktop/server implementation makes it unsuitable for short-lived applications, due to its sluggish launch time. However, it has always been a workhorse on the server side, and it has a successful ecosystem, as long as Oracle’s lawyers do not do it too much harm. Android also brought new life to the old language and its development communities (ignoring for now the bad effects Oracle has brought about).

C# started as Microsoft’s answer to Java, but they have differed more and more since then. I actually like C#, and my experience has shown it is very suitable for Windows application development (I do not have experience with Mono, and I don’t do server development on Windows). Many of its features, like LINQ and on-stack structs, are very likeable.

C is a simple and elegant language, and it can be regarded as the ancestor of the other languages above except Python, at least in syntax. It is the most widely supported. It is the closest to the metal, and it is still very popular in embedded systems, OS development, and cases where maximum portability is wanted (hence the wide offerings from the open-source communities). It is also the most dangerous language, as you can easily have buffer overflows. Incidentally, two of the three current answers to ‘How do you store a list of names input by the user into an array in C (not C++ or C#)?’ can have buffer overflows (and I wrote the other answer). Programmers need to tend to many details themselves.

I myself will code everything in Python where possible, as it usually requires the fewest lines of code and takes the least amount of time. If performance is wanted, I’ll go to C++. For Windows GUI applications, I’ll prefer C#. I will write in C if maximum portability and memory efficiency are wanted. I do not expect to write in Java, except to modify existing code or when the environment supports only Java.

[I first posted it as a Quora answer, but it is probably worth a page of its own.]

25x Performance Boost in Two Hours

Our system has a find_child_regions API, which, as the name indicates, can find subregions of a region up to a certain level. It needs to look up two MongoDB collections, combine the data in a certain structure, and return the result in JSON.

One day, it was reported that the API was slow for big data sets. Tests showed that it took more than 50 seconds to return close to 6000 records. Er . . . that means the average processing speed is only about 100 records a second—not terribly slow, but definitely not ideal.

When there is a performance problem, a profiler is always your friend.1 Profiling quickly revealed that a database read function was called about twice as many times as there were records returned, and took the biggest chunk of time. The reason was that the function first found all the IDs of the regions to return, and then read the data and generated the result. Since the data had already been read once when the IDs were collected, they could be saved and reused. I had to write a new function, which resembled the function that returned region IDs, but returned objects containing all the data read instead (we already had such a class). I also needed to split the result-generating function into two, so that it could accept either the region IDs or the data objects. (I could not change these functions directly, as they had many users other than find_child_regions; changing all of them at once would have been both risky and unnecessary.)
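
In shape, the refactoring looked roughly like this (all names below are made up for illustration; the real functions are more involved):

def read_from_db(region_id):
    ...  # placeholder for the per-record MongoDB read

def format_region(obj):
    ...  # placeholder for building one entry of the JSON result

def generate_result_from_ids(region_ids):
    # Old-style entry point: still triggers one read per ID
    return generate_result_from_objects(
        read_from_db(region_id) for region_id in region_ids)

def generate_result_from_objects(region_objects):
    # New entry point: reuses data that has already been read
    return [format_region(obj) for obj in region_objects]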

In about 30 minutes, this change generated the expected improvement: call time was shortened to about 30 seconds. A good start!

While the improvement percentage looked nice, the absolute time taken was still a bit long. So I continued to look for further optimization chances.

Seeing that database reading was still the bottleneck and the database read function was still called for each record returned, I thought I should try batch reading. Fortunately, I found I only needed to change one function. Basically, I needed to change something like the following

result = []
for x in xs:
    object_id = f(x)
    obj = get_from_db(object_id, …)
    if obj:
        result.append(obj)
return result

to

object_ids = [f(x) for x in xs]
return find_in_db({"_id": {"$in": object_ids}}, …)

That is, in that specific function, all the data of one level of subregions were read in a single batch. Getting four levels of subregions took only four database reads, instead of 6000. This reduced the latency significantly.

In another 30 minutes, the call time was reduced again, from 30 seconds to 14 seconds. Not bad!

Again, the profiler showed that database reading was still the bottleneck. I experimented more, and found that the data objects could be sizeable, whereas we did not always need all the data fields. We might only need, say, 100 bytes from each record, but the average size of a region was more than 50 KB. The functions involved always read the full record, something equivalent to the traditional SQL statement ‘SELECT * FROM ...’. It was convenient, but not efficient. The MongoDB API provides a projection parameter, which allows callers to specify which fields to read from the collection, so I tried it. We had the infrastructure in place, and it was not very difficult. It took me about an hour to make it fully work, as many functions needed to be changed to pass the (optional) projection/field names around. When it finally worked, the result was stunning: if one only needed the basic fields of the regions, the call time could be less than 2 seconds. Terrific!
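
With PyMongo, the change amounts to passing a projection to find; a minimal sketch, in which the database, collection, and field names are made up:

from pymongo import MongoClient

regions = MongoClient().geodb.regions  # hypothetical database/collection
object_ids = [...]                     # IDs collected earlier

# Read only the needed fields instead of ~50 KB whole documents
cursor = regions.find({"_id": {"$in": object_ids}},
                      projection={"name": 1, "level": 1})
basic_info = list(cursor)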

While Python is not a performant language, and I still like C++, I am glad that Python was chosen for this project. The speed advantage of C++ would have been insignificant when the call took more than 50 seconds, and it would still have been a minor gain after I brought the time down to less than 2 seconds. Meanwhile, it would have been simply impossible for me to refactor the code and achieve the same performance in two hours if the code had been written in C++. I highly doubt I could have finished the job even in a full day. I would probably have spent most of the time fighting the compiler and the type system, instead of focusing on the logic and testing.

Life is short—choose your language wisely.


  1. Being able to profile Python programs easily was actually the main reason I purchased a professional licence of PyCharm, instead of just using the Community Edition. 

Pipenv and Relocatable Virtual Environments

Pipenv is a very useful tool to create and maintain independent Python working environments. Using it feels like a breeze. There are enough online tutorials about it, and I will only talk about one specific thing in this article: how to move a virtual environment to another machine.

The reason I need to make virtual environments movable is that our clients usually do not allow direct Internet access in production environments, so we cannot install packages from online sources on production servers. They also often enforce a certain directory structure. So we need to prepare the environment on our test machines, and it would be better if we did not need to worry about where we put the result on the production server. Virtual environments, especially with the help of Pipenv, seem to provide a nice and painless way of achieving this effect, if we can just make the result of pipenv install movable, or, in virtualenv terminology, relocatable.

virtualenv is already able to make most of a virtual environment relocatable. When working with Pipenv, it can be as simple as

virtualenv --relocatable `pipenv --venv`

There are two problems, though:

  • virtualenv --relocatable does not fix the activate script: the VIRTUAL_ENV variable in it is still hard-coded to an absolute path.
  • The lib64 symlink in the virtual environment is an absolute link, and it cannot simply be removed.1

They are not difficult to solve, and we can conquer them one by one.

As pointed out in the issue discussion, one only needs to replace one line in activate to make it relocatable. What is originally

VIRTUAL_ENV="/home/yongwei/.local/share/virtualenvs/something--PD5l8nP"

should be changed to

VIRTUAL_ENV=$(cd $(dirname "$BASH_SOURCE"); dirname `pwd`)

To be on the safe side, I would look for exactly the same line and replace it, so some sed tricks are needed. I also need to take care of the differences between BSD sed and GNU sed, but that is a problem I have solved before.

The second problem is even easier. Creating a new relative symlink solves the problem.

I’ll share the final result here: a simple script that makes a virtual environment relocatable and creates a tarball from it. The archive name ends with ‘-venv-’ plus the platform name, and the archive does not include a root directory. Keep this in mind when you unpack the tarball.

#!/bin/sh

# Use the right in-place flag for GNU sed vs BSD sed
case $(sed --version 2>&1) in
  *GNU*) sed_i () { sed -i "$@"; };;
  *) sed_i () { sed -i '' "$@"; };;
esac

# Escape a path for use in a sed pattern
sed_escape() {
  echo $1|sed -e 's/[]\/$*.^[]/\\&/g'
}

VENV_PATH=`pipenv --venv`
if [ $? -ne 0 ]; then
  exit 1
fi
virtualenv --relocatable "$VENV_PATH"

VENV_PATH_ESC=`sed_escape "$VENV_PATH"`
RUN_PATH=`pwd`
BASE_NAME=`basename "$RUN_PATH"`
PLATFORM=`python -c 'import sys; print(sys.platform)'`
cd "$VENV_PATH"
# Point VIRTUAL_ENV in bin/activate to a path computed at run time
sed_i "s/^VIRTUAL_ENV=\"$VENV_PATH_ESC\"/VIRTUAL_ENV=\$(cd \$(dirname \"\$BASH_SOURCE\"); dirname \`pwd\`)/" bin/activate
# Recreate lib64 as a relative symlink
[ -h lib64 ] && rm -f lib64 && ln -s lib lib64
tar cvfz "$RUN_PATH/$BASE_NAME-venv-$PLATFORM.tar.gz" .

After running the script, I can copy the resulting tarball to another machine running the same OS, unpack it, and then either use the activate script or set the PYTHONPATH environment variable to make my Python program work. Problem solved.

A last note: I have not touched activate.csh and activate.fish, as I do not use them. If you do, you will need to update the script accordingly. That will be your homework as an open-source user. 😼


  1. I tried removing it, and Pipenv was very unhappy.