
It Seemed Like A Good Idea At The Time: MongoReplicaSetClient


The road to hell is paved with good intentions.

I am writing post-mortems for four regrettable decisions in PyMongo, the standard Python driver for MongoDB. Each of these decisions made life painful for Bernie Hackett and me, PyMongo’s maintainers, and confused our users. This winter we are preparing PyMongo 3.0, and we have the chance to fix them all. As I cut out these regrettable designs, I ask: what went wrong?

I conclude the series with the final regrettable decision: MongoReplicaSetClient.


The Beginning

In January 2011, Bernie Hackett was maintaining PyMongo alone. PyMongo’s original author, Mike Dirolf, had moved on, and I had not yet joined.

Replica sets had been released in MongoDB 1.6 the year before, in 2010. They superseded the old master-slave replication scheme, which did not fail over automatically if the master died. With replica sets, if the primary dies, the secondaries elect a new primary right away.

PyMongo 2.0 had a single client class called Connection. By the time our story begins, Bernie had added most of the replica-set features Connection needed. Given the replica set’s name and the addresses of one or more members, it could discover the whole set and connect to the primary. For example, with a three-node set and the primary on port 27019:

>>> # Obsolete code.
>>> from pymongo import Connection
>>> c = Connection('localhost:27017,localhost:27018',
...                replicaset='repl0',
...                safe=True)
>>> c
Connection([u'localhost:27019', 'localhost:27017', 'localhost:27018'])
>>> c.port  # Current primary's port.
27019

If there was a failover, Connection’s next operation failed, but it found and connected to the primary on the operation after that:

>>> c.db.collection.insert({})
error: [Errno 61] Connection refused
>>> c.db.collection.insert({})
ObjectId('548ef36eca1ce90d91000007')
>>> c.port  # What port is the new primary on?
27018

(Note that PyMongo 2.0 threw a socket error after a failover: we consistently wrap errors in our ConnectionFailure exception class now.)
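
Here’s a minimal sketch of what that wrapping buys application code, assuming a set named repl0 with members on the ports above: catch ConnectionFailure around an operation and retry once a new primary is elected, rather than special-casing socket errors. The retry count and the one-second pause are arbitrary.

# A minimal sketch: failovers now surface as ConnectionFailure (for example
# AutoReconnect), so a retry loop replaces socket-error handling.
import time

from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient('localhost:27017,localhost:27018', replicaSet='repl0')

for attempt in range(3):
    try:
        client.db.collection.insert({})  # PyMongo 2.x-style insert.
        break  # The write reached the primary.
    except ConnectionFailure:
        time.sleep(1)  # Wait for the set to elect a new primary, then retry.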

Reading From Secondaries

The Connection class’s replica set features were pretty well-rounded, actually. But a user asked Bernie for a new feature: he wanted a convenient way to query from secondaries. Our Ruby and Node drivers supported this feature using a different connection class. So in late 2011, just as I was joining the company, Bernie wrote a new class, ReplicaSetConnection. Depending on your read preference, it would read from the primary or a secondary:

>>> from pymongo import ReplicaSetConnection, ReadPreference
>>> rsc = ReplicaSetConnection(
...    'localhost:27017,localhost:27018',
...    replicaset='repl0',
...    read_preference=ReadPreference.SECONDARY,
...    safe=True)

Besides distributing reads to secondaries, the new ReplicaSetConnection had another difference from Connection: a monitor thread. Every 30 seconds, the thread proactively updated its view of the replica set’s topology. This gave ReplicaSetConnection two advantages. First, it could detect when a new secondary had joined the set, and start using it for reads. Second, even if it was idle during a failover, after 30 seconds it would detect the new primary and use it for the next operation, instead of throwing an error on the first try.
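
In outline, the monitoring loop looks something like the toy sketch below; it is illustrative only, not PyMongo’s actual implementation. The class name and the check_member callback, which would run the ismaster command against one member and return its response (or None if the member is down), are stand-ins for the example.

# A toy monitor thread: every `interval` seconds it re-checks each known
# member and refreshes a shared view of the set, so even an idle client
# notices a new primary or a newly added secondary.
import threading
import time

class ReplicaSetMonitor(threading.Thread):
    def __init__(self, seeds, check_member, interval=30):
        super(ReplicaSetMonitor, self).__init__()
        self.daemon = True
        self.hosts = set(seeds)           # E.g. {'localhost:27017', 'localhost:27018'}.
        self.check_member = check_member  # Returns a member's ismaster response, or None.
        self.interval = interval          # ReplicaSetConnection used 30 seconds.
        self.primary = None
        self.secondaries = set()

    def run(self):
        while True:
            for host in list(self.hosts):
                response = self.check_member(host)
                if not response:
                    continue              # Member is down; keep checking the rest.
                if response.get('ismaster'):
                    self.primary = host
                elif response.get('secondary'):
                    self.secondaries.add(host)
                # Learn about members that joined since the last check.
                self.hosts.update(response.get('hosts', []))
            time.sleep(self.interval)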

ReplicaSetConnection was mostly the same as the existing Connection class. But it was different enough that there was some risk: the new code might have new bugs. Or at least, it might have surprising differences from Connection’s behavior.

PyMongo has special burdens, since it’s the intersection between two huge groups: MongoDB users and the Python world, possibly the largest language community in history. These days PyMongo is downloaded half a million times a month, and back then its stats were big, too. So Bernie trod very cautiously. He didn’t force you to use the new code right away. Instead, he made a separate class you could opt in to. He released ReplicaSetConnection in PyMongo 2.1.

The Curse

But we never merged the two classes.

Ever since November 2011, when Bernie wrote ReplicaSetConnection and I joined MongoDB, we’ve maintained ReplicaSetConnection’s separate code. It gained features. It learned to run mapreduce jobs on secondaries. Its read preference options expanded to include members’ network latency and tags. Connection gained distinct features, too, diverging further from ReplicaSetConnection: it can connect to the nearest mongos from a list of them, and fail over to the next if that mongos goes down. Other features applied equally to both classes, so we wrote them twice. We had two tests for most of these features. When we renamed Connection to MongoClient, we also renamed ReplicaSetConnection to MongoReplicaSetClient. And still, we didn’t merge them.

The persistent, slight differences between the two classes persistently confused our users. I remember my feet aching as I stood at our booth at PyCon in 2013, explaining to a user when he should use MongoClient and when he should use MongoReplicaSetClient—and I remember his expression growing sourer each minute as he realized how irrational the distinction was.

I explained it again during MongoDB Office Hours, when I sat at a cafeteria table with a couple users, soon after we moved to the office in Times Square. And again, I saw the frustration on their faces. I explained it on Stack Overflow a couple months later. I’ve been explaining this for as long as I’ve worked here.

The Curse Is Lifted

This year, two events conspired to kill MongoReplicaSetClient. First, we resolved to write a PyMongo 3.0 with a cleaned-up API. Second, I wrote the Server Discovery And Monitoring Spec, a comprehensive description of how all our drivers should connect to a standalone server, a set of mongos servers, or a replica set. This spec closely followed the design of our Java and C# drivers, which never had a ReplicaSetConnection. These drivers each have a single class that connects to any kind of MongoDB topology.

Since the Server Discovery And Monitoring Spec provides the algorithm to connect to any topology with the same class, I just followed my spec and wrote a unified MongoClient for PyMongo 3. For the sake of backwards compatibility, MongoReplicaSetClient lives a while longer as an empty, deprecated subclass of MongoClient.

The new MongoClient has many advantages over both its ancestors. Mainly, it’s concurrent: it connects to all the servers in your deployment in parallel. It runs your operations as soon as it finds any suitable server, while it continues to discover the rest of the deployment using background threads. Since it discovers and monitors all servers in parallel, it isn’t hampered by a down server, or a distant one. It will be responsive even with the very large replica sets that will be possible in MongoDB 2.8, or the even larger ones we may someday allow.
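
In practice that looks something like this, with hostnames and the five-second timeout chosen just for illustration: the constructor returns immediately while background threads discover the deployment, and the first operation waits, up to serverSelectionTimeoutMS, until a suitable server turns up.

# PyMongo 3.0: the constructor does not block on discovery.
from pymongo import MongoClient

client = MongoClient(
    'mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=repl0',
    serverSelectionTimeoutMS=5000)  # Give up if no suitable server appears in 5 seconds.

# Blocks only until a server that can satisfy the read (by default, the
# primary) is discovered, then runs immediately.
doc = client.db.collection.find_one()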

Unifying the two classes also makes MongoDB URIs more powerful. Let’s say you develop your Python code against a standalone mongod on your laptop, then you test in a staging environment with a replica set, then deploy to a sharded cluster. If you set the URI with a config file or environment variable, you had to write code like this:

# PyMongo 2.x.
import os

from pymongo import MongoClient, MongoReplicaSetClient
from pymongo.uri_parser import parse_uri

uri = os.environ['MONGODB_URI']
if 'replicaset' in parse_uri(uri)['options']:
    client = MongoReplicaSetClient(uri)
else:
    client = MongoClient(uri)

This is annoying. Now, the URI controls everything:

# PyMongo 3.0.
client = MongoClient(os.environ['MONGODB_URI'])

Configuration and code are properly separated.
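
For example, all of these hypothetical MONGODB_URI values (the hostnames are invented) work with that single line; the replica set name, and even a read preference, travel in the URI instead of in code.

# Hypothetical URIs for the three environments described above.
standalone_uri = 'mongodb://localhost:27017'  # Laptop: a single mongod.
replica_set_uri = ('mongodb://db1:27017,db2:27017'
                   '/?replicaSet=repl0&readPreference=secondaryPreferred')  # Staging.
sharded_cluster_uri = 'mongodb://mongos1:27017,mongos2:27017'  # Production: mongos servers.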

The Moral Of The Story

I need your help—what is the moral? What should we have done differently?

When Bernie added read preferences and a monitor thread to PyMongo, I understand why he didn’t overhaul the Connection class itself. The new code needed a shakedown cruise before it could be the default. You ask, “Why not publish a beta?” Few people install betas of PyMongo. Customers do thoroughly test early releases of the MongoDB server, but for PyMongo they just use the official release. So if we published a beta and received no bug reports, that wouldn’t prove anything.

Bernie wanted the new code exercised. So it needed to be in a release. He had to commit to an API, so he published ReplicaSetConnection alongside Connection. Once ReplicaSetConnection was published it had to be supported forever. And worse, we had to maintain the small differences between Connection and ReplicaSetConnection, for backwards compatibility.

Maybe the moment to merge them was when we introduced MongoClient in late 2012. You had to choose to opt into MongoClient, so we could have merged the two classes into one new class, instead of preserving the distinction and creating MongoReplicaSetClient. But the introduction of MongoClient was complex and urgent; we didn’t have time to unify the classes, too. It was too much risk at once.

I think the moral is: cultivate beta testers. That’s what I did with Motor, my asynchronous driver for Tornado and MongoDB. It had long alpha and beta phases where I pressed developers to try it. I found PyMongo and AsyncMongo users and asked them to try switching to Motor. I kept a list of Motor testers and checked in with them occasionally. I ate my own hamster food: I used Motor to build the blog you’re reading. Once I had some reports of Motor in production, and I saw it mentioned on Stack Overflow, and I discovered projects that depended on Motor on GitHub, I figured I had users and it was time for an official release.

Not all these methods will work for an established project like PyMongo, but still: for PyMongo 3.0, we should ask our community to help shake out the bugs.

When the beta is ready, will you help?


This is the final installment in my four-part series on regrettable decisions we made with PyMongo.