The Right to Read

By Richard Stallman

This article appeared in the February 1997 issue of Communications of the ACM (Volume 40, Number 2).


(From The Road to Tycho, a collection of articles about the forerunners of the Lunarian Revolution, published in Luna City in 2096.)

For Dan Halbert, the road to Tycho began in college—when Lissa Lenz asked to borrow his computer. Hers had broken down, and unless she could borrow another, she would fail her midterm project. There was no one she dared ask, except Dan.

This put Dan in a dilemma. He had to help her—but if he lent her his computer, she might read his books. The very thought shocked him, not to mention that it was actually a crime: if you let someone else read your books, you could go to prison for many years! Like everyone else, he had been taught since elementary school that sharing books was nasty and wrong—something only pirates would do.

Worse, the SPA—the Software Protection Authority—would very likely catch him. In his software class, Dan had learned that each book had a copyright monitor that reported when, where, and by whom it was read to Central Licensing. (They used this information to catch reading pirates, but also to sell personal-interest profiles to retailers.) The next time his computer was networked, Central Licensing would find out what he had done. He, as the computer's owner, would receive the harshest punishment—for not taking pains to prevent the crime.

Of course, Lissa did not necessarily intend to read his books. She might want the computer only to finish her midterm. But Dan knew she came from a middle-class family that could hardly afford the tuition, let alone her reading fees. Reading his books might be the only way she could graduate. He understood the situation—he himself had had to borrow to pay for all the research papers he read. (Ten percent of those fees went to the papers' authors; since Dan aimed at an academic career, he could hope that his own research papers, if frequently referenced, would bring in enough income to repay the loan.)

Later on, Dan would learn that there was a time when anyone could go to a library and read journal articles, and even whole books, for free. There were independent scholars who could read thousands of pages without needing the permission of government libraries. But starting in the 1990s, both commercial and nonprofit journal publishers had begun charging for access. By 2047, few people still remembered that there had once been libraries where the general public could access scholarly literature.

There were, of course, ways around the SPA and Central Licensing—they were just illegal. Dan had a classmate in his software class, Frank Martucci, who had obtained an illicit debugging tool and used it to skip over the copyright-monitor code when reading books. But he told too many friends about it, and one of them turned him in to the SPA for a reward (students deep in debt were easily tempted into betrayal). In 2047, Frank was in prison—not for pirate reading, but for possessing a debugger.

Later on, Dan would also learn that there was a time when anyone could own debugging tools. Some were even free, available on CD or downloadable over the net. But as ordinary users began using them to bypass copyright monitors, a judge eventually ruled that bypassing copyright monitoring had become debuggers' principal use in actual practice. That meant debuggers were illegal, and the developers of debuggers were sent to prison.

Programmers still needed debugging tools, of course. In 2047, debugger vendors sold numbered copies only, and only to officially licensed and bonded programmers. The debugger Dan used in software class was kept behind a special firewall so that it could be used only for class exercises.

It was also possible to bypass the copyright monitors by installing a modified system kernel. Dan would eventually learn about the free kernels, and even entire free operating systems, that had existed around the turn of the century. But not only were they illegal, like debuggers—even if you had one, you could not install it without knowing your computer's root password. And neither the FBI nor Microsoft Support would tell you that.

Dan concluded that he could not simply lend Lissa his computer. But he could not refuse to help her, because he loved her. Every chance to speak with her filled him with delight. And that she had chosen him to ask for help could mean she loved him too.

Dan resolved the dilemma by doing something even more unthinkable—he lent her the computer, and told her his password. That way, when Lissa read his books, Central Licensing would think he was reading them. It was still a crime, but the SPA would not automatically find out about it. They would find out only if Lissa reported him.

Of course, if the school ever learned that he had given Lissa his password, it would be curtains for both of them, regardless of what she had used it for. School policy was that any interference with its means of monitoring students' computers was grounds for disciplinary action. Whether you actually did anything harmful did not matter—the offense was making it hard for the administrators to check on you. They assumed this meant you were doing something forbidden, and they did not need to know what it was.

Students were not usually expelled for this—at least not directly. Instead, they were banned from the school's computer systems, and would inevitably fail all their classes.

Later, Dan would learn that this kind of university policy had started only in the 1980s, when university students began using computers in large numbers. Before that, universities had taken a different approach to student discipline: they punished activities that were actually harmful, not those that merely raised suspicion.

Lissa did not report Dan to the SPA. His decision to help her led to their marriage—and also led them to question what they had been taught about piracy as children. The couple took to reading about the history of copyright, about the Soviet Union and its restrictions on copying, and even the original United States Constitution. They moved to Luna City, where they found others who had likewise escaped the long arm of the SPA. When the Tycho Uprising began in 2062, the universal right to read soon became one of its central aims.

Author's Notes

This note was updated in 2007.

The right to read is a battle being fought today. Although it may take 50 years for our present way of life to fade into obscurity, most of the specific laws and practices described above have already been proposed, and many have been enacted into law in the US and elsewhere. In the US, the 1998 Digital Millennium Copyright Act (DMCA) established the legal basis for restricting the reading and lending of computerized books (and other works as well). The European Union imposed similar restrictions in a 2001 copyright directive. In France, under the Law on Copyright and Related Rights in the Information Society (DADVSI) adopted in 2006, mere possession of the DeCSS program (free software for decrypting the video on a DVD) is a crime.

In 2001, Senator Hollings, with Disney funding, proposed a bill called the SSSCA that would require every new computer to have mandatory copy-restriction facilities that the user cannot bypass. Following on the heels of the Clipper chip and similar US government key-escrow proposals, this showed a long-term trend: computer systems are increasingly set up to give control to third parties rather than to the people actually using them. The SSSCA was later renamed the (hard-to-pronounce) CBDTPA, which people deliberately read as the ‘Consume But Don’t Try Programming Act’.

The Republicans took control of the US Senate shortly thereafter. They are less tied to Hollywood than the Democrats, so they did not press these proposals. Now that the Democrats are back in control, the danger has grown once again.

In 2001 the US began trying to use the proposed Free Trade Area of the Americas (FTAA) treaty to impose the same rules on every country in the Western Hemisphere. The FTAA is one of the so-called ‘free trade’ treaties, actually designed to give businesses, rather than democratic governments, increased power; imposing DMCA-like laws is typical of this spirit. Brazil's President Lula rejected the DMCA requirement and others, effectively killing the FTAA.

Since then, the US has imposed similar requirements on countries such as Australia and Mexico through bilateral ‘free trade’ agreements, and on countries such as Costa Rica through the Central American Free Trade Agreement. Ecuador's President Correa refused to sign a ‘free trade’ agreement, but Ecuador had adopted a DMCA-like law in 2003. Ecuador's new constitution may provide an opportunity to get rid of it.

One of the ideas in the story did not actually happen until 2002: the idea that the FBI and Microsoft would keep the root passwords for your personal computer, and not let you have them.

The proponents of this scheme have given it names such as ‘trusted computing’ and ‘Palladium’. We call it ‘treacherous computing’, because the effect is to make your computer obey other companies instead of you. This was implemented in 2007 as part of Windows Vista; we expect Apple to do something similar. In this scheme, it is the manufacturer that keeps the secret code, but the FBI would have little trouble getting it.

What Microsoft keeps is not exactly a password in the traditional sense; no one ever types it on a terminal. Rather, it is a signature and encryption key that corresponds to a second key stored on your computer. This gives Microsoft—and potentially any web sites that cooperate with Microsoft—ultimate control over the user's own computer.

Vista also gives Microsoft additional powers. For instance, Microsoft can forcibly install upgrades, and it can order all machines running Vista to refuse to run a certain device driver. The main purpose of Vista's many restrictions is to create DRM that users cannot overcome.

The SPA, which actually stands for the Software Publishers Association, has been replaced in this police-like role by the BSA (Business Software Alliance). It is not an official police force today; unofficially, it behaves very much like one. It entices people to inform on their coworkers and friends, using methods reminiscent of the old Soviet Union. A BSA terror campaign in Argentina in 2001 made veiled threats that people sharing software could be raped.

When this story was first written, the SPA was threatening small Internet service providers (ISPs), demanding that they allow the SPA to monitor all their users. Most ISPs surrendered when threatened, because they could not afford to fight back in court. At least one ISP, Community ConneXion of Oakland, California, refused the demand and was actually sued. The SPA later dropped the suit, but it obtained the DMCA, which gave it the power it sought.

The university security policies described above are not imaginary. For example, a computer at one Chicago-area university prints the following message when you log in:

This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring, and is advised that if such monitoring reveals possible evidence of illegal activity or violation of University regulations, system personnel may provide the evidence of such monitoring to University authorities and/or law enforcement officials.

This is an interesting approach to the Fourth Amendment: pressure nearly everyone to agree, in advance, to waive their rights under it.

Translator: Wu Yongwei

Original: https://www.gnu.org/philosophy/right-to-read.en.html

Note: This is an article I translated quite a few years ago. Its intended usage has ceased to exist, and I am sharing it online. Recent changes at the English site are not reflected in this translation.

My Opinions Regarding the Top Five TIOBE Languages

I have written C++ for nearly 30 years. I had been advocating that it was the best language 🤣, until my love moved to Python a few years ago. I will still say C++ is a very powerful and unique language. It is probably the only language that intersects many different software layers. It lets programmers control the bit-level details, and it has the necessary mechanisms to allow programmers to make appropriate abstractions—arguably one of the best as it provides powerful generics, which are becoming better and better with the upcoming concepts and ranges in C++20. It has very decent optimizing compilers, and suitably written C++ code performs better than nearly all other languages. Therefore, C++ has been widely used in not only low-level stuff like drivers, but also libraries and applications, especially where performance is wanted, like scientific computing and games. It is still widely used in desktop applications, say, Microsoft Office and Adobe Photoshop. The power does come with a price: it is probably the most complicated computer language today. Mastering the language takes a long time (and with 30 years’ experience I dare not say I have mastered the language). Generic code also tends to take a long time to compile. Error messages can be overwhelming, especially to novices. I can go on and on, but it is better to stop here, with a note that the complexity and cost are sometimes worthwhile, in exchange for reduced latency and reduced power usage (from CPU and memory).
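
As a quick illustration of where C++ generics are heading, here is a minimal sketch of a constrained template in the C++20 style (based on the concepts design as I understand it; not code from any project mentioned here):

#include <concepts>
#include <iostream>

// A constrained template: the requirement is part of the interface,
// so misuse produces one short diagnostic instead of pages of errors.
template <std::integral T>
T twice(T value)
{
    return value + value;
}

int main()
{
    std::cout << twice(21) << std::endl;  // OK: int satisfies std::integral
    // twice(1.5);  // would be rejected: double is not an integral type
}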

Python is, on the other hand, easy to learn. It is not a toy language, though: it is handy not only to novices, but also to software veterans like me. The change-and-run cycle is much shorter than with C++. Code in Python is very readable, partly because lists, sets, and dictionaries are supported as literal types (you cannot write in C++ an expression like {"one": 1} and let the compiler deduce it is a dictionary). It has features that C++ lacked for many years: generators/coroutines, lazy ranges, and so on. Generics do not need special support, as the language is dynamically typed (but it does not surprise programmers by allowing error-prone expressions like "1" + 2, as some scripting languages do). With a good IDE, the argument about its lack of compile-time checking can be crushed—programmers can enjoy edit-time checks. It has a big ecosystem with a huge number of third-party libraries, and they are easier to obtain and use than in C++ (thanks to pip). The main remaining shortcoming, for me, is performance, but: 1) one may write C/C++ extensions where necessary; and 2) the lack of performance may not matter at all if your application is not CPU-bound. See my personal experience of a 25x performance boost in two hours.
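
For contrast, here is roughly what that one-token Python literal costs in C++ (a sketch of mine): the container and element types have to be spelled out in full.

#include <map>
#include <string>

int main()
{
    // The C++ counterpart of Python's {"one": 1}: the braces alone
    // cannot tell the compiler this is a dictionary, so the type must
    // be written out explicitly.
    std::map<std::string, int> numbers{{"one", 1}};
    return numbers.at("one") == 1 ? 0 : 1;
}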

I used Java a long time ago. I do not like it (mostly for its verbosity), and its desktop/server implementation makes it unsuitable for short-lived applications, due to its sluggish launch time. However, it has always been a workhorse on the server side, and it has a successful ecosystem, if not much harmed by Oracle’s lawyers. Android has also brought new life to the old language and its development communities (ignoring for now the bad effects Oracle has brought about).

C# started as Microsoft’s answer to Java, but the two languages have diverged more and more since then. I actually like C#, and my experience has shown that it is very suitable for Windows application development (I have no experience with Mono, and I do not do server development on Windows). Many of its features, like LINQ and on-stack structs, are very likeable.

C is a simple and elegant language, and it can be regarded as the ancestor of the other three languages above (all except Python), at least in syntax. It is the most widely supported. It is the closest to the metal, and is still very popular in embedded systems, OS development, and cases where maximum portability is wanted (hence the wide offerings from the open-source communities). It is also the most dangerous language, as you can easily have buffer overflows. Incidentally, two of the three current answers to ‘How do you store a list of names input by the user into an array in C (not C++ or C#)?’ can have buffer overflows (and I wrote the other answer). Programmers need to tend to many details themselves.
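
To make the danger concrete, here is a hedged sketch (mine, not one of the Quora answers) of reading a user-supplied name without overflowing the buffer; the unsafe alternatives are noted in the comments:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[32];

    /* Unlike gets(name) or scanf("%s", name), fgets never writes more
     * than sizeof name bytes, no matter how much the user types. */
    if (fgets(name, sizeof name, stdin) == NULL) {
        return 1;
    }
    name[strcspn(name, "\n")] = '\0';  /* strip the trailing newline */
    printf("Hello, %s!\n", name);
    return 0;
}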

I myself will code everything in Python where possible, as it usually requires the fewest lines of code and the least amount of time. If performance is wanted, I’ll go to C++. For Windows GUI applications, I’ll prefer C#. I will write in C if maximum portability and memory efficiency are wanted. I do not expect to write in Java, except to modify existing code or when the environment supports Java only.

[I first posted it as a Quora answer, but it is probably worth a page of its own.]

25x Performance Boost in Two Hours

Our system has a find_child_regions API, which, as the name indicates, can find subregions of a region up to a certain level. It needs to look up two MongoDB collections, combine the data in a certain structure, and return the result in JSON.

One day, it was reported that the API was slow for big data sets. Tests showed that it took more than 50 seconds to return close to 6000 records. Er . . . that means the average processing speed is only about 100 records a second—not terribly slow, but definitely not ideal.

When there is a performance problem, a profiler is always your friend.1 Profiling quickly revealed that a database read function was called about twice as many times as there were returned records, and that it took the biggest chunk of time. The reason was that the function first found all the IDs of the regions to return, and then read all the data and generated the result. Since the data had already been read once when the IDs were collected, they could be saved and reused. I had to write a new function, which resembled the function that returned region IDs, but returned objects containing all the data read instead (we had such a class already). I also needed to split the result-generating function into two, so that either the region IDs or the data objects could be accepted. (I could not change these functions directly, as they have many users other than find_child_regions; changing all of them at once would have been both risky and unnecessary.)

In about 30 minutes, this change generated the expected improvement: call time was shortened to about 30 seconds. A good start!

While the improvement percentage looked nice, the absolute time taken was still a bit long. So I continued to look for further optimization chances.

Seeing that database reading was still the bottleneck and the database read function was still called for each record returned, I thought I should try batch reading. Fortunately, I found I only needed to change one function. Basically, I needed to change something like the following

result = []
for x in xs:
    object_id = f(x)
    obj = get_from_db(object_id, …)
    if obj:
        result.append(obj)
return result

to

object_ids = [f(x) for x in xs]
return find_in_db({"_id": {"$in": object_ids}}, …)

That is, in that specific function, all the data of one level of subregions were read in one batch. Getting four levels of subregions took only four database reads, instead of 6000. This reduced the latency significantly.

In 30 minutes, the call time was again reduced, from 30 seconds to 14 seconds. Not bad!

Again, the profiler showed that database reading was still the bottleneck. I made more experiments, and found that the data object could be sizeable, whereas we did not always need all data fields. We might only need, say, 100 bytes from each record, but the average size of each region was more than 50 KB. The functions involved always read the full record, something equivalent to the traditional SQL statement ‘SELECT * FROM ...’. It was convenient, but not efficient. MongoDB APIs provided a projection parameter, which allowed callers to specify which fields to read from the collection, so I tried it. We had the infrastructure in place, and it was not very difficult. It took me about an hour to make it fully work, as many functions needed to be changed to pass the (optional) projection/field names around. When it finally worked, the result was stunning: if one only needed the basic fields about the regions, the call time could be less than 2 seconds. Terrific!

While Python is not a performant language, and I still like C++, I am glad that Python was chosen for this project. The speed-up from using C++ would have been negligible when the call time was more than 50 seconds, and it would still be small now that the time is under 2 seconds. Meanwhile, it would have been simply impossible for me to refactor the code and achieve the same performance in two hours if the code had been written in C++. I highly doubt I could have finished the job in a full day. I would probably have spent most of the time fighting the compiler and the type system, instead of focusing on the logic and testing.

Life is short—choose your language wisely.


  1. Being able to profile Python programs easily was actually the main reason I purchased a professional licence of PyCharm, instead of just using the Community Edition. 

Pipenv and Relocatable Virtual Environments

Pipenv is a very useful tool to create and maintain independent Python working environments. Using it feels like a breeze. There are enough online tutorials about it, and I will only talk about one specific thing in this article: how to move a virtual environment to another machine.

The reason I need to make virtual environments movable is that our clients do not usually allow direct Internet access in production environments, so we cannot install packages from online sources on production servers. They also often enforce a certain directory structure. So we need to prepare the environment on our test machines, and it would be better if we did not need to worry about where to put the result on the production server. Virtual environments, especially with the help of Pipenv, seem to provide a nice and painless way of achieving this effect—if we can just make the result of pipenv install movable, or, in virtualenv’s terminology, relocatable.

virtualenv is already able to make most of the virtual environment relocatable. When working with Pipenv, it can be as simple as

virtualenv --relocatable `pipenv --venv`

There are two problems, though:

  • The activate script hard-codes the absolute path of the virtual environment in the VIRTUAL_ENV variable, which virtualenv --relocatable does not fix.
  • The lib64 symbolic link inside the environment is absolute, so it dangles once the environment is moved.1

They are not difficult to solve, and we can conquer them one by one.

As pointed out in the issue discussion, one only needs to replace one line in activate to make it relocatable. What is originally

VIRTUAL_ENV="/home/yongwei/.local/share/virtualenvs/something--PD5l8nP"

should be changed to

VIRTUAL_ENV=$(cd $(dirname "$BASH_SOURCE"); dirname `pwd`)

To be on the safe side, I would look for exactly that line and replace it, so some sed tricks are needed. I also need to take care of the differences between BSD sed and GNU sed, but that is a problem I have solved before.

The second problem is even easier. Creating a new relative symlink solves the problem.

I’ll share the final result here: a simple script that makes a virtual environment relocatable and creates a tarball from it. The archive name has ‘-venv-platform’ as the suffix, but the archive does not include a root directory. Keep this in mind when you unpack the tarball.

#!/bin/sh

# Use the correct in-place option for GNU sed vs BSD sed
case $(sed --version 2>&1) in
  *GNU*) sed_i () { sed -i "$@"; };;
  *) sed_i () { sed -i '' "$@"; };;
esac

# Escape a string so it can be used literally in a sed pattern
sed_escape() {
  echo $1|sed -e 's/[]\/$*.^[]/\\&/g'
}

# Locate the virtual environment of the current project
VENV_PATH=`pipenv --venv`
if [ $? -ne 0 ]; then
  exit 1
fi
virtualenv --relocatable "$VENV_PATH"

VENV_PATH_ESC=`sed_escape "$VENV_PATH"`
RUN_PATH=`pwd`
BASE_NAME=`basename "$RUN_PATH"`
PLATFORM=`python -c 'import sys; print(sys.platform)'`
cd "$VENV_PATH"
# Make the VIRTUAL_ENV setting in bin/activate path-independent
sed_i "s/^VIRTUAL_ENV=\"$VENV_PATH_ESC\"/VIRTUAL_ENV=\$(cd \$(dirname \"\$BASH_SOURCE\"); dirname \`pwd\`)/" bin/activate
# Replace the absolute lib64 symlink with a relative one
[ -h lib64 ] && rm -f lib64 && ln -s lib lib64
tar cvfz $RUN_PATH/$BASE_NAME-venv-$PLATFORM.tar.gz .

After running the script, I can copy the resulting tarball to another machine running the same OS, unpack it, and then either use the activate script or set the PYTHONPATH environment variable to make my Python program work. Problem solved.

A last note: I have not touched activate.csh and activate.fish, as I do not use them. If you did, you would need to update the script accordingly. That would be your homework as an open-source user. 😼


  1. I tried removing it, and Pipenv was very unhappy. 

A VPN Issue with MTU

One environment I have access to uses a PPTP VPN to allow people to connect to the site remotely.1 One thing that had been troublesome was that people kept complaining that they could not access the Internet after connecting to the VPN.

I was not concerned in the beginning, as my own test showed no problem: my browser seemed to have no problems opening http://www.taobao.com/ after connecting to the VPN. Actually, my test was flawed and limited, as I only accessed one or two sites in a virtual machine (my laptop ran a macOS version that no longer supported PPTP). More on this shortly.

Our previous VPN server had a problem, and we switched to the Linux-based pptpd last week. After the set-up was done, I checked with other users and found the web access problem persisted. This time I sat down with one user and looked into the problem together. It turned out that, after connecting to the VPN, he was able to access http://www.taobao.com/, but not http://www.baidu.com/, which was actually the default web page for many people. And I could reproduce this behaviour in my virtual machine. . . .

My experience told me that it was very much like an MTU-related problem (I have encountered plenty of MTU-related networking problems). I checked the server-side script, and found it already clamped the MSS value to 1356, while the MTU value for the PPP connections was 1396. All seemed quite reasonable.

When in doubt about a network problem, a sniffer should always be in your arsenal. I launched tcpdump on the server, and analysed the result in Wireshark. Things soon became clearer.

For the traffic between the pptpd server and Baidu (when a client visited the web site), the following things occurred:

  1. The pptpd server started a connection to the web server, with MSS = 1356
  2. The web server responded with MSS = 1380
  3. The web server soon sent a packet as large as 1420 bytes (TCP payload length is 1380 bytes)
  4. The pptpd server responded with ICMP Destination unreachable (Fragmentation needed), in which the next-hop MTU of 1396 was reported
  5. The above two steps were repeated, and nothing was improved

For the traffic between the pptpd server and Taobao, things were slightly different:

  1. The pptpd server started a connection to the web server, with MSS = 1356
  2. The web server responded with MSS = 1380
  3. The web server soon sent a packet as large as 1420 bytes (TCP payload length is 1380 bytes)
  4. The pptpd server responded with ICMP Destination unreachable (Fragmentation needed), in which the next-hop MTU of 1396 was reported
  5. A few milliseconds later, the web server began to send TCP packets no larger than 1396 bytes
  6. Now the pptpd server and the web server continued to exchange packets without any problems

Apparently there was an ICMP black hole between our server and the Baidu server, but not between our server and the Taobao server.

Once the issue was found, the solution was easy. Initially, I just ran a cron job to check all the PPP connections and changed their MTU value to 1468 (though 1420 should be good enough in my case). The better way, of course, was to change the MTU on new client connections. It could be done via the script /etc/ppp/ip-up, but the environment variable name for the network interface—which I found on the web—was wrong in the beginning. After dumping all the existing environment variables in the script, I finally got the correct name. The following line in /etc/ppp/ip-up was able to get the job done:

ifconfig $IFNAME mtu 1468

Only one thing remained mysterious: why didn’t the MSS value set in the server-side script take effect? A packet capture on a server I could control confirmed my guess: the MSS value in the TCP SYN packets from our pptpd server had been changed to 1380 somewhere en route. It could be the router, or the ISP. Whatever it was, it really should not have adjusted the value upwards.

In summary, problems occurred because:

  • The MSS value was increased, but pptpd did not know and still enforced a small MTU value on the PPP connections, which no longer matched the MSS
  • Path MTU discovery also failed because of the existence of ICMP black holes

Bad things can always happen, and we sometimes just have to find a way around.


  1. PPTP is not considered secure enough, but is quite convenient, especially because UDP port 500 is not usable in our case due to a router compatibility problem. 😔 

Fixing A VS2017 15.6 Installation Problem

After installing the latest Visual Studio 2017 15.6.6 (Community Edition), I found that my custom setting of the INCLUDE environment variable no longer took effect in the Developer Command Prompt. Strangely, LIB was still honoured. Some tracing indicated that it was a bug in the .BAT files Microsoft provided to initialize the environment. The offending lines are the following (in C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\Tools\vsdevcmd\core\winsdk.bat; one of them is very long and requires scrolling for viewing):

@REM the folowing are architecture neutral
set __tmpwinsdk_include=
if "%INCLUDE%" NEQ "" set "__tmp_include=;%INCLUDE%"
set "INCLUDE=%WindowsSdkDir%include\%WindowsSDKVersion%shared;%WindowsSdkDir%include\%WindowsSDKVersion%um;%WindowsSdkDir%include\%WindowsSDKVersion%winrt;%WindowsSdkDir%include\%WindowsSDKVersion%cppwinrt%__tmpwinsdk_include%"
set __tmpwinsdk_include=

Apparently somebody missed renaming __tmp_include to __tmpwinsdk_include. Doing that myself fixed the problem.

I’ve reported the problem to Microsoft. In the meanwhile, you know how to fix it if you encounter the same problem.

On the Use of She as a Generic Pronoun

When reading the August 2017 issue of Communications of the ACM, I was continually distracted by the use of she as a generic pronoun:

Instead of a field engineer constantly traveling between locations, she could troubleshoot machinery and refine product designs in real time . . .

There were times when one person had to be in charge while she captured the organization of the emerging article . . .

. . . we can let the user specify how much precision she wants . . .

A mathematician using “brute force” is a kind of barbaric monster, is she not?

I am not sure whether this is just my personal problem, but I find this usage obtrusive and annoying. I was reading something supposed to be objective and scientific, but the images of women kept surfacing. The last case was especially so, as I could not help envisioning a female mathematician (er, how many female mathematicians have there been?) who was also a barbaric monster, oops, how bad it was!

I dug around for a while for related resources. Before long, I realized one thing: my view is at least partly shaped by my education, which taught me to use he as the third-person singular pronoun when the gender is unknown, in both English and Chinese. My unscientific survey shows that while many of my female friends are uncomfortable with either he or she used generically, most of my Chinese female friends actually prefer he to she! According to an online discussion, speakers of at least some languages in Continental Europe still use the masculine pronoun when the gender is unknown, say, hij in Dutch and il/ils in French.1 I think the French example is quite interesting to Chinese speakers, as neither French nor Chinese has a gender-neutral third-person plural pronoun: the generic forms ils and 他们 are actually masculine. Unlike the English they, we never had a nice and simple way to escape the problem.

Speaking of they, one fact I found during the search surprised me. My favourite English author, Jane Austen, apparently preferred they/their in her novels.2 Examples (emphasis is mine):

You wanted me, I know, to say ‘Yes,’ that you might have the pleasure of despising my taste; but I always delight in overthrowing those kind of schemes, and cheating a person of their premeditated contempt.

To be sure, you knew no actual good of me—but nobody thinks of that when they fall in love.

Digging deeper, I found that they has been used after words like each, everybody, nobody, etc. since the Middle Ages. The entries everybody and their in the Oxford English Dictionary are practically a demonstration of such usages, with a note in the latter entry that reads ‘Not favoured by grammarians’.3 Professor Steven Pinker also argues that using they/their/them after everyone is not only correct, but logical as well.4 Oops to the prescriptivist grammarians and my English education!

By accident, I encountered an old article by Douglas R. Hofstadter,5 author of the famous book Gödel, Escher, Bach: An Eternal Golden Braid (also known as GEB). It is deeply satirical, and it attacks most of the points I had for supporting the use of man and he (go read it; it is highly recommended even though I do not fully agree). It influenced my thinking, even though it ignored the etymology of man. The Oxford Dictionary of English has this usage note:6

Traditionally the word man has been used to refer not only to adult males but also to human beings in general, regardless of sex. There is a historical explanation for this: in Old English the principal sense of man was ‘a human being’, and the words wer and wif were used to refer specifically to ‘a male person’ and ‘a female person’ respectively. Subsequently, man replaced wer as the normal term for ‘a male person’, but at the same time the older sense ‘a human being’ remained in use. In the second half of the twentieth century the generic use of man to refer to ‘human beings in general’ (as in ‘reptiles were here long before man appeared on the earth’) became problematic; the use is now often regarded as sexist or at best old-fashioned.

Etymology is not a good representation of word meaning, but I want to point out that Hofstadter committed a logical fallacy in comparing man/woman with white/black. Man did include woman at one point in time; one probably cannot say the same for white and black.

This said, the war over the continued use of -man is already lost. Now that I am aware of this issue, I do not think I want to use words like policeman again when the gender is unknown. I still do not think words like mankind, manhole, actress, or mother tongue are bad.7 Society and culture are probably a much bigger headache for women facing inequalities. . . .8

I started being angry, but ended up more understanding. And I also reached a different conclusion than I had expected. It is apparent that somebody will be offended, whether I use he, she, he or she, or they after a noun of unknown gender. I think offending grammarians would now probably be my default choice.

P.S. I have also found Professor Ellen Spertus’s article ‘Why are There so Few Female Computer Scientists?’ worth reading.9 Recommended.


  1. StackExchange discussion: Is using “he” for a gender-neutral third-person correct? Retrieved on 21 October 2017. 
  2. Henry Churchyard: Singular “their” in Jane Austen and elsewhere: Anti-pedantry page. 1999. Internet Archive. 
  3. Oxford English Dictionary. Oxford University Press, 2nd edition, 1989. 
  4. Steven Pinker: On the English singular “their” construction—from The Language Instinct. 1994. Internet Archive. 
  5. Douglas R. Hofstadter: A Person Paper on Purity in Language. 1985. Internet Archive. 
  6. Oxford Dictionary of English. Oxford University Press, macOS built-in edition, 2016. This is different from the famous OED. 
  7. These words are already banned in some places. See entry sexist language in R. W. Burchfield: Fowler’s Modern English Usage. Oxford University Press, revised 3rd edition, 2004. 
  8. Henry Etzkowitz et al.: Barriers to Women in Academic Science and Engineering. 1994. Internet Archive. 
  9. Ellen Spertus: Why are There so Few Female Computer Scientists? 1991. Internet Archive. 

A Journey of Purely Static Linking

As I mentioned last time, I found Microsoft has really messed up its console Unicode support when the C runtime DLL (as opposed to the static runtime library) is used. So I decided to have a try at linking everything statically in my project that uses the C++ REST SDK (a.k.a. cpprestsdk). This is not normally recommended, but in my case it has two obvious advantages:

  • It would solve the Unicode I/O problem.
  • It would be possible to ship just the binaries without requiring the target PC to install the Microsoft Visual C++ runtime.

It took me several hours to get it rolling, but I felt it was worthwhile.

Before I start, I need to mention that cpprestsdk has a solution file that supports building a static library. It turned out to be unsatisfactory:

  • It used NuGet packages for Boost and OpenSSL, and both versions were out of date. Worse, my Visual Studio 2017 IDE hung while I tried to update the packages. Really a nuisance.
  • The static library, as well as all its dependencies like Boost and OpenSSL, still uses the C runtime DLL. I figured it might be easier to go completely on my own.

Prerequisites

Boost

This part is straightforward. After going into the Boost directory, I only need to type (I use version 1.65.1):

bootstrap.bat
.\b2.exe toolset=msvc -j 2 --with-chrono --with-date_time --with-regex --with-system --with-thread release link=static runtime-link=static stage

OpenSSL

As I already have Perl and NASM installed, installing OpenSSL is trivial too (I use version 1.0.2l):

perl Configure VC-WIN32 --prefix=C:/Libraries/OpenSSL
ms\do_nasm
nmake -f ms\nt.mak
nmake -f ms\nt.mak install

zlib

This part requires a small change to the build script (for version 1.2.11). I need to open win32\Makefile.msc and change all occurrences of ‘-MD’ to ‘-MT’. Then these commands will work:

nmake -f win32\Makefile.msc zlib.lib
mkdir C:\Libraries\zlib
mkdir C:\Libraries\zlib\include
mkdir C:\Libraries\zlib\lib
copy zconf.h C:\Libraries\zlib\include
copy zlib.h C:\Libraries\zlib\include
copy zlib.lib C:\Libraries\zlib\lib

Building C++ REST SDK

We need to set some environment variables to help the CMake build system find where the libraries are. I set them in ‘Control Panel > System > Advanced system settings > Environment variables’:1

BOOST_ROOT=C:\src\boost_1_65_1
OPENSSL_ROOT_DIR=C:\Libraries\OpenSSL
INCLUDE=C:\Libraries\zlib\include
LIB=C:\Libraries\zlib\lib

(The above setting assumes Boost is unpacked under C:\src.)

We would need to create the solution files for the current environment under cpprestsdk:

cd Release
mkdir build
cd build
cmake ..

If the environment is set correctly, the last command should succeed and report no errors. A cpprest.sln should be generated now.

We then open this solution file. As we only need the release files, we should change the ‘Solution Configuration’ from ‘Debug’ to ‘Release’. After that, we need to find the project ‘cpprest’ in ‘Solution Explorer’, go to its ‘Properties’, and make the following changes under ‘General’:

  • Set Target Name to ‘cpprest’.
  • Set Target Extension to ‘.lib’.
  • Set Configuration Type to ‘Static library (.lib)’.

And the most important change we need under ‘C/C++ > Code Generation’:

  • Set Runtime Library to ‘Multi-threaded (/MT)’.

Click on ‘OK’ to accept the changes. Then we can build this project.

Like zlib, we need to copy the header and library files to a new path, and add the include and lib directories to environment variables INCLUDE and LIB, respectively. In my case, I have:

INCLUDE=C:\Libraries\cpprestsdk\include;C:\Libraries\zlib\include
LIB=C:\Libraries\cpprestsdk\lib;C:\Libraries\zlib\lib

Changes to My Project

Of course, the cpprestsdk-based project needs to be adjusted too. I will first show the diff, and then give some explanations:

--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -24,14 +24,25 @@ set(CMAKE_CXX_FLAGS "${ELPP_FLAGS}")

 if(WIN32)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}\
- -DELPP_UNICODE -D_WIN32_WINNT=0x0601")
-set(USED_LIBS Boost::dynamic_linking ${Boost_DATE_TIME_LIBRARY} ${Boost_SYSTEM_LIBRARY} ${Boost_THREAD_LIBRARY})
+ -DELPP_UNICODE -D_WIN32_WINNT=0x0601 -D_NO_ASYNCRTIMP")
+set(USED_LIBS Winhttp httpapi bcrypt crypt32 zlib)
 else(WIN32)
 set(USED_LIBS "-lcrypto" OpenSSL::SSL ${Boost_CHRONO_LIBRARY} ${Boost_SYSTEM_LIBRARY} ${Boost_THREAD_LIBRARY})
 endif(WIN32)

 if(MSVC)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc /W3")
+set(CompilerFlags
+        CMAKE_CXX_FLAGS
+        CMAKE_CXX_FLAGS_DEBUG
+        CMAKE_CXX_FLAGS_RELEASE
+        CMAKE_C_FLAGS
+        CMAKE_C_FLAGS_DEBUG
+        CMAKE_C_FLAGS_RELEASE)
+foreach(CompilerFlag ${CompilerFlags})
+  string(REPLACE "/MD" "/MT" ${CompilerFlag}
+         "${${CompilerFlag}}")
+endforeach()
 else(MSVC)
 set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -W -Wall -Wfatal-errors")
 endif(MSVC)

There are two blocks of changes. In the first block, one can see that the Boost libraries are no longer needed; instead, one needs to link the Windows dependencies of cpprestsdk (I found the list in Release\build\src\cpprest.vcxproj), as well as zlib. One also needs to explicitly define _NO_ASYNCRTIMP so that the cpprestsdk functions will not be treated as dllimport.

As CMake defaults to using ‘/MD’, the second block of changes replaces all occurrences of ‘/MD’ with ‘/MT’ in the compiler flags.2 With these changes, I am able to generate an executable without any external dependencies.

A Gotcha

I am now used to using cmake without specifying the ‘-G’ option on Windows. By default, CMake generates Visual Studio project files, which have several advantages, including multiple configurations (selectable on the MSBuild command line like ‘/p:Configuration=Release’) and parallel building (say, using ‘/m:2’ to take advantage of two processor cores). Neither is possible with nmake. However, the executables built this way still behave abnormally when outputting non-ASCII characters. Actually, I believe everything is still OK at the linking stage, but the build process then touches the executables in some mysterious way and the result becomes bad. I am not familiar enough with MSBuild to manipulate the result, so I am going back to using ‘cmake -G "NMake Makefiles" -DCMAKE_BUILD_TYPE=Release’ followed by ‘nmake’ for now.


  1. CMake can recognize Boost and OpenSSL by some known environment variables. I failed to find one that really worked for zlib, so the INCLUDE and LIB variables need to be explicitly set. 
  2. This technique is shamelessly copied from a StackOverflow answer. 

Another Microsoft Unicode I/O Problem

I encountered an annoying bug in Visual C++ 2017 recently. It started when I found that my favourite logging library, Easylogging++, output Chinese as garbage characters on Windows. Checking the documentation carefully, I noticed that I should have used the macro START_EASYLOGGINGPP. Using it turned out to be even worse: all output starting from the first Chinese character was gone. Puzzled but busy, I put the problem aside and worked on something else.

I spent another hour of detective work on this issue today. The result was quite surprising.

  • First, it is not an issue with Easylogging++. The problem can occur if I purely use std::wcout.
  • Second, the magical thing about START_EASYLOGGINGPP is that it will invoke std::locale::global(std::locale("")). This is the switch that leads to the different behaviour (see the sketch after this list).
  • Mysteriously, with the correct locale setting, I can get the correct result with both std::wcout and Easylogging++ in a test program. I was not able to get it working in my real project.
  • Finally, it turns out that the difference above is caused by /MT vs. /MD! The former (the default if neither is specified on the command line) tells the Visual C++ compiler to use the static multi-threading library, and the latter (set by default in Visual Studio projects) tells the compiler to use the dynamic multi-threading library.
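
Here is a minimal sketch of the kind of test program involved (my own reconstruction, with a hypothetical Chinese string; not the actual project code):

#include <iostream>
#include <locale>

int main()
{
    // What START_EASYLOGGINGPP effectively does: replace the default
    // "C" locale with the system locale, so that wide-character output
    // can be converted properly.
    std::locale::global(std::locale(""));
    std::wcout << L"Hello, 世界" << std::endl;
    // Built with /MT, this prints correctly; built with /MD under
    // VC++ 2015/2017, output stops at the first non-ASCII character.
}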

People may remember that I wrote about MSVCRT.DLL Console I/O Bug. While Visual C++ 2013 shows consistent behaviour between /MT and /MD, Visual C++ 2015 and 2017 exhibit the same woeful bug when /MD is specified on the command line. This is something perplexingly weird: it seems someone at Microsoft messed up with the MSVCRT.DLL shipped with Windows first (around 2006), and then the problem spread to the Visual C++ runtime DLL after nearly a decade!

I am using many modern C++ features, so I definitely do not want to go back to Visual C++ 2013 for new projects. It seems I have to tolerate garbage characters in the log for now. Meanwhile, I have submitted a bug report to Microsoft. Given that another report of mine has been deferred for four years, I am not very hopeful. But let us wait and see.

Update (20 December 2017)

A few more tests show that the debug versions (/MTd and /MDd) both work well. So only the default release build (using the dynamic C runtime) exhibits this problem, where the executable depends on DLLs like api-ms-win-crt-stdio-l1-1-0.dll. It seems this issue is related to the Universal C Runtime introduced in Visual Studio 2015 and Windows 10. . . .

Update (25 March 2018)

The bug was closed, and a Microsoft developer indicated that the issue had already been fixed since the Windows 10 Anniversary Update SDK (build 10.0.14393). Actually, I had had build 10.0.15063 installed. The reason why I still saw the problem was that the Universal C Runtime on Windows 7 had not been updated (‘the issue will be fixed in a future update to the Universal C Runtime on Windows 7’), and I should not have seen the problem on a Windows 10 box. The current workaround is either to use static linking (as I did), or to copy the redistributable DLLs under C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs\x86 (or x64, etc.) to the app directory (so-called ‘app-local deployment’, which should not be used on Windows 10, as the system version is always preferred). My test showed that copying ucrtbase.dll was enough to fix my test case.

C/C++ Performance, mmap, and string_view

A C++ discussion group I participated in got a challenge last week. Someone posted a short Perl program, and claimed that it was faster than the corresponding C version. It was to this effect (with slight modifications):

open IN, "$ARGV[0]";
my $gc = 0;
while (my $line = <IN>) {
  $gc += ($line =~ tr/cCgG//);
}
print "$gc\n";
close IN;

The program simply searched and counted all occurrences of the letters ‘C’ and ‘G’ from the input file in a case-insensitive way. Since the posted C code was incomplete, I wrote a naïve implementation to test, which did turn out to be slower than the Perl code. It was about two times as slow on Linux, and about 10 times as slow on macOS.1

#include <stdio.h>

int main(int argc, char* argv[])
{
    FILE* fp = fopen(argv[1], "rb");
    int count = 0;
    int ch;
    while ((ch = getc(fp)) != EOF) {
        if (ch == 'c' || ch == 'C' || ch == 'g' || ch == 'G') {
            ++count;
        }
    }
    fclose(fp);
    printf("%d\n", count);
    return 0;
}

Of course, it actually shows how optimized the Perl implementation is, instead of how inferior the C language is. Before I had time to test my mmap_line_reader, another person posted an mmap-based solution, to the following effect (with modifications):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char* argv[])
{
    /* Error checking omitted for brevity */
    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    fstat(fd, &st);
    size_t len = st.st_size;
    int count = 0;
    char* ptr = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    char* begin = ptr;
    char* end = ptr + len;
    while (ptr < end) {
        char ch = *ptr++;
        if (ch == 'c' || ch == 'C' || ch == 'g' || ch == 'G')
            ++count;
    }
    munmap(begin, len);
    printf("%d\n", count);
    return 0;
}

When I tested my mmap_line_reader, I found that its performance was only on par with the Perl code, but slower than the handwritten mmap-based code. It is not surprising, considering that mmap_line_reader copies the line content, while the C code above searches directly in the mmap’d buffer.

I then thought of the C++17 string_view.2 It seemed a good chance to use it to return a line without copying its content. The refactoring (on code duplicated from mmap_line_reader) was actually easy, and most of the original code did not require any changes. I got faster code for this test, but the implementations of mmap_line_reader and the new mmap_line_reader_sv were nearly identical, except for a few small differences.
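
The gist of the change is easy to show in isolation. Below is a simplified sketch of mine (not the actual nvwa code): instead of copying each line into a string, return a view into the existing buffer.

#include <iostream>
#include <string_view>

// Return the first line of the buffer as a view: no allocation and no
// copying, just a pointer and a length into the existing storage.
std::string_view first_line(std::string_view buffer)
{
    auto pos = buffer.find('\n');
    if (pos == std::string_view::npos) {
        return buffer;
    }
    return buffer.substr(0, pos + 1);
}

int main()
{
    std::string_view content = "line one\nline two\n";
    std::cout << first_line(content);  // prints "line one" and a newline
}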

Naturally, the next step was to refactor again to unify the implementations. I made a common base to store the bottommost mmap-related logic, and made the difference between string and string_view disappear with a class template. Now mmap_line_reader and mmap_line_reader_sv were just two aliases of specializations of basic_mmap_line_reader!

While mmap_line_reader_sv was faster than mmap_line_reader, it was still slower than the mmap-based C code. So I made another abstraction, a ‘container’ that allowed iteration over all of the file content. Since the mmap-related logic was already mostly separated, only some minor modifications were needed to make that base class fully independent of the line reading logic. After that, adding mmap_char_reader was easy, which was simply a normal container that did not need to mess with platform-specific logic.

At this point, all seemed well—except one thing: it only worked on Unix. I did not have an immediate need to make it work on Windows, but I really wanted to show that the abstraction provided could work seamlessly regardless of the platform underneath. After several revisions, in which I dealt with proper Windows support,3 proper 32- and 64-bit support, and my own mistakes, I finally made it work. You can check out the current code in the nvwa repository. With it, I can finally make the following simple code work on Linux, macOS, and Windows, with the same efficiency as raw C code when fully optimized:4

#include <iostream>
#include <stdlib.h>
#include "nvwa/mmap_byte_reader.h"

int main(int argc, char* argv[])
{
    if (argc != 2) {
        std::cerr << "A file name is needed" << std::endl;
        exit(1);
    }

    try {
        int count = 0;
        for (char ch : nvwa::mmap_char_reader(argv[1]))
            if (ch == 'c' || ch == 'C' ||
                ch == 'g' || ch == 'G')
                ++count;
        std::cout << count << std::endl;
    }
    catch (std::exception& e) {
        std::cerr << e.what() << std::endl;
    }
}

Even though it is not used in the final code, I love the C++17 string_view. And I like the simplicity I finally achieved. Do you?

P.S. The string_view-based test code is posted here too as a reference. Line-based processing is common enough!5

#include <iostream>
#include <stdlib.h>
#include "nvwa/mmap_line_reader.h"

int main(int argc, char* argv[])
{
    if (argc != 2) {
        std::cerr << "A file name is needed" << std::endl;
        exit(1);
    }

    try {
        int count = 0;
        for (const auto& line :
                nvwa::mmap_line_reader_sv(argv[1]))
            for (char ch : line)
                if (ch == 'c' || ch == 'C' ||
                    ch == 'g' || ch == 'G')
                    ++count;
        std::cout << count << std::endl;
    }
    catch (std::exception& e) {
        std::cerr << e.what() << std::endl;
    }
}

Update (18 September 2017)

Thanks to Alex Maystrenko (see the comments below), it is now understood that the reason why getc was slow is that there is an implicit lock around file operations. I did not expect it, as I grew up in an age when multi-threading was the exception, and I had not learnt about the existence of getc_unlocked until he mentioned it! According to the getc_unlocked page in the POSIX specification:

Some I/O functions are typically implemented as macros for performance reasons (for example, putc() and getc()). For safety, they need to be synchronized, but it is often too expensive to synchronize on every character. Nevertheless, it was felt that the safety concerns were more important; consequently, the getc(), getchar(), putc(), and putchar() functions are required to be thread-safe. However, unlocked versions are also provided with names that clearly indicate the unsafe nature of their operation but can be used to exploit their higher performance.

After replacing getc with getc_unlocked, the naïve implementation immediately outperforms the Perl code on Linux, though not on macOS.6
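
For reference, the fix is a one-line change to the naïve implementation (getc_unlocked is POSIX, not standard C, so this version is Unix-only):

#include <stdio.h>

int main(int argc, char* argv[])
{
    FILE* fp = fopen(argv[1], "rb");
    int count = 0;
    int ch;
    /* The same loop as before, minus the per-character locking that
     * makes getc thread-safe—and slow when called millions of times. */
    while ((ch = getc_unlocked(fp)) != EOF) {
        if (ch == 'c' || ch == 'C' || ch == 'g' || ch == 'G') {
            ++count;
        }
    }
    fclose(fp);
    printf("%d\n", count);
    return 0;
}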

Another interesting thing to notice is that GCC generates highly optimized code for the comparison with ‘c’, ‘C’, ‘g’, and ‘G’,7 something extremely unlikely for interpreted languages like Perl. Observe that the codes for the characters are:

  • 01000011₂ or 67₁₀ (‘C’)
  • 01000111₂ or 71₁₀ (‘G’)
  • 01100011₂ or 99₁₀ (‘c’)
  • 01100111₂ or 103₁₀ (‘g’)

GCC basically ANDs the input character with 11011011₂, and compares the result with 01000011₂. In Intel-style assembly:

        movzx   ecx, BYTE PTR [r12]
        and     ecx, -37
        cmp     cl, 67

It is astoundingly short and efficient!
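
The same trick is easy to reproduce by hand. Here is a sketch of mine in C++ (0xDB is the 11011011₂ mask above):

#include <cassert>

// Clearing bit 2 (which distinguishes 'C' from 'G') and bit 5 (which
// distinguishes upper case from lower case) collapses all four letters
// into 'C' (67), so one AND plus one comparison replaces four comparisons.
inline bool is_c_or_g(unsigned char ch)
{
    return (ch & 0xDB) == 'C';
}

int main()
{
    assert(is_c_or_g('c') && is_c_or_g('C'));
    assert(is_c_or_g('g') && is_c_or_g('G'));
    assert(!is_c_or_g('a') && !is_c_or_g('t'));
}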


  1. I love my Mac, but I do feel Linux has great optimizations. 
  2. If you are not familiar with string_view, check out its reference. 
  3. Did I mention how unorthogonal and uncomfortable the Win32 API is, when compared with POSIX? I am very happy to hide all the ugliness from the application code. 
  4. Comparison was only made on Unix (using the POSIX mmap API), under GCC with ‘-O3’ optimization (‘-O2’ was not enough). 
  5. ‘-std=c++17’ must be specified for GCC, and ‘/std:c++latest’ must be specified for Visual C++ 2017. 
  6. However, the performance improvement is more dramatic on macOS, from about 10 times slower to 5% slower. 
  7. Of course, this is only possible when the characters are given as compile-time constants.