Meteor: From Beginner to Mastery

by Elie Steinbock


I’ve been running Meteor at scale for a year now. Here’s what I’ve learned.

A year ago I wrote an article describing my first experiences scaling Meteor. In short, I created a popular fantasy football website using Meteor. At a certain point, my service started slowing down. The single server I had running the game could no longer handle the load. I was able to solve these early scaling issues by — among other things — adding additional servers.


Well, when last summer’s new season of football arrived, I once again ran into scaling issues. Adding more servers alone wouldn’t solve these problems. But I did manage to overcome them.


This article will explain things I learned this time around, broken down into six pieces of practical advice.


One thing that has changed since my last article is that the Meteor Development Group has finally released Galaxy, which gives you Meteor hosting at $29 per container per month. This doesn’t include database hosting, but you can use something like Compose or mLab for that. Alternatively, you can self-host the database on AWS or DigitalOcean. This will be cheaper, but will require more work on your part.


I myself use DigitalOcean for hosting. The site runs on $5/month, 512MB droplets with one Meteor instance running per droplet. I use Meteor Up (Mup) for deployment and Compose.io for database hosting.


Whether to go with DigitalOcean or Galaxy is up to you. Galaxy will do a bunch of stuff for you and will save you time. Going the DigitalOcean route will save you $24 per container per month. For a lot of companies going with Galaxy makes the most sense since developer salaries will be far more expensive. In any case, I’ll leave the business decisions up to you.


Moving on. There are a few things that really helped scale our Meteor app this summer. We had some bad days. It really wasn’t smooth sailing at times, but we got through it.


A summary of lessons learned

Here are the major lessons I learned from my year of scaling:


Lesson #1: MongoDB indexes are super important!


Lesson #2: Having too many Meteor instances is a problem!


Lesson #3: Don’t worry about scaling Nginx.


Lesson #4: Disconnect users when they’ve been away for a while


Lesson #5: Will Griggs is on fire


Lesson #6: Take a cue from how they scaled Pokemon Go


Let’s go through these one by one.


Lesson 1: MongoDB indexes are super important!

So this one was an amateur mistake. Every article on scaling Meteor (or MongoDB) tells you to use indexes. And I did! But one index was missing, and I got burned for it — really burned — on the most important night of the year for us.


Explaining indexes by way of example. If you have 10,000 player scores and want to find the highest score, in a regular case Mongo would have to go through all these scores to find the highest one. If you have an index on the score, then Mongo saves a copy of all the scores in either ascending or descending order, and will find the highest score in a fraction of the time. You can read more about indexes on the MongoDB website.


In a Meteor project, I recommend having one publications.js file that contains all your publications. Below each publication you should have code that creates the necessary index for that publication. The code to create an index in Meteor looks something like this:


Meteor.startup(function () {
  Teams._ensureIndex({ userId: 1 });
});
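To make the pairing concrete, here's a rough sketch (the publication name and query are made up) of how a publication and its index can sit next to each other in publications.js:

// publications.js — a hypothetical publication that the index above supports.
Meteor.publish('myTeams', function () {
  // This queries Teams by userId, so the { userId: 1 } index above covers it.
  return Teams.find({ userId: this.userId });
});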

The _id field has an index by default so you don’t need to worry about that.


Getting into the details. I use Compose.io for MongoDB hosting. They’ve been fine, and support has also been okay, but don’t listen to them when they think all your problems can be solved by adding more RAM. This isn’t true. It might work sometimes, but in my case it was just nonsense advice.


I use Kadira.io for performance monitoring. Every Meteor app should use Kadira and the basic package is great and free, so no reason not to. (Update: Kadira is currently still an obvious choice for Meteor apps, but the team behind Kadira recently moved away from Meteor, so beware of that for the future.)


In Kadira I was seeing graphs such as these:


At a certain point the PubSub and Methods Response time become ridiculously large. Anything above 1,000 ms to respond is problematic. Even a 500ms response time can be bad. But 10–20 seconds as an average response time for an hour straight basically means your users hate you and your app is barely working for them.


In general, when things are performing slowly, I just add more servers. And I did that here too, except this time, adding more servers just made things worse. Far, far worse. But we’ll get to that later.


So at this point, what you do is you scramble around Google and spam StackOverflow and the Meteor forums.


Eventually I landed upon this gem in the Kadira dashboards:


From this we see that the database is taking forever to respond. Adding more Meteor instances is not going to help us here. We need to sort out Mongo.


Kadira was no good at showing me why the database was responding so slowly. Every publication and method was showing a very high database response time.


The answer came from visiting Compose.io at peak times. On the dashboard, you can have a look at the current ops (current operations) that are running at any given moment. I saw something like this (but far worse):


I had no idea what all this mumbo jumbo was, but you’ll see that each op has a secs_running field. In the image above it says 0 seconds for everything, which is great! But what I was seeing during peak time was 14 seconds, 9 seconds, 10 seconds… for the different operations that were going on! And it was all coming from the same query being made by my app.


I ran this query myself and it really did take something like 16 seconds to get a response! Not good! And running it with explain (as some on the Meteor forums suggested) showed that 180,000+ documents were being scanned! Here is one of the problematic queries:

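The exact query isn't reproduced here, but judging from the indexes I added below, it was shaped roughly like this (a reconstruction with made-up values, not the original):

// Hypothetical reconstruction of the slow query's shape:
// find a team's head-to-head fixtures for a given gameweek.
HeadToHeadMatches.find({
  $or: [{ team1Id: 'someTeamId' }, { team2Id: 'someTeamId' }],
  gameweek: 5
});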

Anyway… lo and behold, there’s no index set up for this query. I added the following indexes:


Meteor.startup(function () {
  HeadToHeadMatches._ensureIndex({ team1Id: 1, gameweek: 1 });
  HeadToHeadMatches._ensureIndex({ team2Id: 1, gameweek: 1 });
});

After this, the whole database started responding quickly again. This one problematic query had been slowing down the entire database!


UPDATE #1: based on Josh Owen's comment, a better way to add indexes is to use Collection.rawCollection and createIndex, but the above code will work for you until at least Meteor 1.4.2.

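A minimal sketch of that approach, assuming the same collections as above:

// Sketch of the rawCollection()/createIndex() approach (same indexes as above).
// rawCollection() exposes the underlying Node MongoDB driver collection,
// and createIndex() returns a promise, so errors can be handled explicitly if needed.
Meteor.startup(function () {
  HeadToHeadMatches.rawCollection().createIndex({ team1Id: 1, gameweek: 1 });
  HeadToHeadMatches.rawCollection().createIndex({ team2Id: 1, gameweek: 1 });
});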

UPDATE #2: indexes are more complicated than I first thought, having run into trouble with them again this week. You probably won't be able to find all the queries that need indexes without looking through your logs.


You need to find all queries that are using COLLSCAN. This means the query is not using an index, and to find a document, Mongo has to loop through the entire collection to check whether the document you're searching for exists.


If you’re using Compose.io and are on MongoDB classic then you’ll need to email support to find which queries are using COLLSCAN. If you’re on their MongoDB 3.2 plan then you should be able to find these queries in their dashboard.


Also, if you suspect a query is problematic, run the query with explain() and you'll be able to see how many docs are being scanned. If nscanned is equal to the number of documents in the entire collection, you have a problem and need an index. One badly indexed query can massively affect your entire database, since it will lock it down for all queries.

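Here's a rough example of what that check looks like in the mongo shell (collection and field names are illustrative, and the exact output fields vary by MongoDB version):

// Run explain on a suspect query and inspect the winning plan.
db.headToHeadMatches.find({ team1Id: 'someTeamId', gameweek: 5 }).explain('executionStats')
// Warning signs in the output:
//   - a winningPlan stage of "COLLSCAN" instead of "IXSCAN"
//   - totalDocsExamined (nscanned on older versions) close to the size of the whole collection
// Either one means the query isn't using an index.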

Lesson 2: Having too many Meteor instances is a problem!

So once you learn how to scale to multiple instances, you hope it’s the end of all the scaling misery. Alas, this is not the case. And adding too many servers will hurt performance at a certain point.


This is because Mongo uses additional RAM for each connection to the database. I must have had around 60–70 instances connected to my database at some point, and Mongo did not like it, nor did I need that many. The Meteor instances weren’t the bottleneck for performance.


You can give Mongo more RAM, of course, but just be wary of what happens when you keep adding more servers. You might be taking the load off each Meteor instance, but you're adding load to Mongo, creating a new bottleneck.

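If you want to see how many connections your Meteor instances are holding open, the mongo shell can tell you (a quick illustrative check; the numbers in the comment are made up):

// How many connections are open right now, and how many the server can still accept.
db.serverStatus().connections
// e.g. { "current" : 212, "available" : 2836, "totalCreated" : 51234 }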

Lesson 3: Don't worry about scaling Nginx

One thing I was worried about this summer was that Nginx would be my bottleneck. This will rarely be the case. Nginx should be able to handle thousands of concurrent users without a problem.


I did speak to a company that was having troubles with Nginx a few months ago. They had to handle a couple of thousand concurrent connections. You can read this article for some more tips about optimising Nginx for high traffic loads.


Some highlights from the article that are worth using immediately:


Turn off access logs:


By default nginx will write every request to a file on disk for logging purposes, you can use this for statistics, security checks and such, however it comes at the cost of IO usage. If you don’t use access logs for anything you can simply just turn it off and avoid the disk writes.


Worker processes and connections:


Worker Processes


The worker process is the backbone of nginx, once the master has bound to the required IP/ports it will spawn workers as the specified user and they'll then handle all the work. Workers are not multi-threaded so they do not spread the per-connection load across CPU cores. Thus it makes sense for us to run multiple workers, usually 1 worker per CPU core. For most work loads anything above 2–4 workers is overkill as nginx will hit other bottlenecks before the CPU becomes an issue and usually you'll just have idle processes. If your nginx instances are CPU bound after 4 workers then hopefully you don't need me to tell you.


An argument for more worker processes can be made when you’re dealing with situations where you have a lot of blocking disk IO. You will need to test your specific setup to check the waiting time on static files, and if it’s big then try to increase worker processes.


Worker Connections


Worker connections effectively limits how many connections each worker can maintain at a time. This directive is most likely designed to prevent run-away processes and in case your OS is configured to allow more than your hardware can handle. As nginx developer Valentine points out on the nginx mailing list, nginx can close keep-alive connections if it hits the limit, so we don't have to worry about our keep-alive value here. Instead we're concerned with the amount of currently active connections that nginx is handling. The formula for the maximum number of connections we can handle then becomes:


worker_processes * worker_connections * (K / average $request_time)


Where K is the amount of currently active connections. Additionally, for the value K, we also have to consider reverse proxying which will open up an additional connection to your backend.


In the default configuration file the worker_connections directive is set to 1024. If we consider that browsers normally open up 2 connections for pipelining site assets, then that leaves us with a maximum of 512 users handled simultaneously. With proxying this is even lower, though hopefully your backend responds fairly quickly to free up the connection.


All things considered about worker connections it should be fairly clear that if you grow in traffic you’ll want to eventually increase the amount of connections each worker can do. 2048 should do for most people but honestly, if you have this kind of traffic you should not have any doubt how high you need this number to be.


Lesson 4: Disconnect idle users

This one is important! I don't know why this isn't a bigger thing in the Meteor community!


Disconnect users when they’ve just left their tab open. It’s so simple to do and saves precious resources.


To disconnect automatically you can use this package: mixmax:smart-disconnect.

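If you just want to see the idea, here's a rough client-side sketch of the same behaviour (the 60-second threshold is an arbitrary choice, and the package is more robust than this):

// Client code: drop the DDP connection after the tab has been hidden for a while,
// and reconnect as soon as the user comes back.
var DISCONNECT_AFTER_MS = 60 * 1000; // arbitrary idle threshold
var disconnectTimer = null;

document.addEventListener('visibilitychange', function () {
  if (document.hidden) {
    disconnectTimer = setTimeout(function () {
      Meteor.disconnect(); // frees up a session on the server
    }, DISCONNECT_AFTER_MS);
  } else {
    clearTimeout(disconnectTimer);
    Meteor.reconnect(); // re-establishes the connection and resubscribes
  }
});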

Lesson 5: Will Griggs is on fire

If you got this far in the post, you’re probably feeling super inspired and in the mood for a football chant. I present you with Will Griggs:


There wasn’t actually a point here. It just seemed like the appropriate thing to write at this point in the article. But if we actually want to learn a lesson from it then here goes:


If you’re a solo developer, and you have thousands of people relying on your app to work right now, things can get stressful. My advice to you (and to myself): calm the heck down. Listen to some Will Griggs on Fire. Hopefully you’ll work it out, and even if things mess up, it’s probably not as bad as you think.


Pokemon Go was pretty awful at the start. Servers were constantly overloaded, but people kept coming back to play. From a business perspective, Niantic still made a killing. The hype has now died down, but that has nothing to do with their scaling issues, or the many early bugs. It’s just the end of the fad.


So the life lesson: listen to Will Griggs when you're stressed out.


Lesson 6: Take a cue from how they scaled Pokemon Go

On the topic of Pokemon Go, let's talk a bit about what happened. Firstly, Pokemon Go will not happen to you. Pokemon Go had a strong team of ex-Googlers who knew how to deal with enormous loads, but even they got caught out by the popularity of the app. They were ready for a big load, but not for a load the size of Twitter.


Apps built around Pokemon Go also popped up. Pokemon Go chat apps and Pokemon Go Instagrams appeared and became very popular, very quickly, with a million users in a matter of days. Some of these apps were developed by solo developers, and handling the load was a challenge for them.


There’s this article about how someone built a Pokemon Go Instagram app with 500,000 users in 5 days and ran it on a $100 per month server. That’s impressive. And the takeaway from the article is that you can build a quick MVP that scales if you know what you’re doing.


If you can do that, that's definitely great, but if you're a young and inexperienced developer, that may not be possible. I would recommend going ahead and building your dream app without worrying too much about what happens when you need to scale.


If you can build things the right way from the get-go, that's definitely a plus, and it's worth asking more experienced developers for advice to try to get things right from the start. But don't let the fear of scaling hold you back from creating your dream app. The harsh reality is that people probably won't like your app, and it would be impressive if you can get 100 people to use it.


But following the principles of the lean startup, it’s better to build something, get it into the hands of real users, and get feedback, than to never launch due to fear of not being able to deal with a heavy load.


Looking Ahead

These episodes of dealing with scaling have been a burden, and ideally I would have preferred not to have to deal with these issues at all. It would be great if things just worked and you could push off scaling issues for as long as possible. Because of this, I've started looking at other platforms that handle scale better.


One such platform is Elixir, which is built on Erlang. Erlang is what WhatsApp uses, and it allowed a team of 35 engineers to scale to 450 million users! Even today, with close to 1 billion users, WhatsApp has a team of only 50 engineers! That's pretty incredible, and you can read more here. How did they achieve such awesome scale for a real-time app with so few people? The answer is Erlang. And today you can utilize the power of Erlang with the Phoenix Framework and Elixir. We're still using Meteor, but I am considering moving some aspects of the app over to Elixir, which would enable us to handle large-scale live updates.


I'd also take a look at Apollo, which works with Meteor or any Node.js server. Apollo will help you scale Meteor, because with Apollo you don't need every single publication to be reactive (reactive publications are the biggest drain on server CPU for Meteor apps). You can achieve a similar result using Meteor methods to send data instead of publications.

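As a rough sketch of that pattern (collection and method names are made up), the data is fetched once on demand instead of being kept reactive:

// Server: a plain method returns the data once, with no reactive publication.
Meteor.methods({
  'leagueTable.fetch': function (leagueId) {
    return Teams.find({ leagueId: leagueId }, { sort: { score: -1 } }).fetch();
  }
});

// Client: call the method when the page needs the data, instead of subscribing.
Meteor.call('leagueTable.fetch', 'someLeagueId', function (error, teams) {
  if (!error) {
    // Render from `teams`; call again whenever you want fresh data.
  }
});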

One last point: despite many influential Meteor developers recently leaving the community, there have been some developments on the scaling front. Check out the redis-oplog package and discussion for more. It's a very new package, and I'd still say it's in beta based on my limited experience playing with it a week ago.


If you enjoyed this post, give it a heart, and if you want to keep up with the latest in inspirational scaling articles, give me a follow.


Translated from: https://www.freecodecamp.org/news/scaling-meteor-a-year-on-26ee37588e4b/
