There are simply too many people, but we're trying our best. 🙂
-
Identified
Just saw a burst of users coming online; attempting to re-scale.
Update: Scaled. Monitoring performance; we may have to go further.
Update: Trying to push throughput even further.
Update: We're at an architectural limit, so we're going to temporarily disable some events (incl. typing indicators and user updates) to help ease congestion while an actual fix is being put together.
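For context, dropping non-essential realtime events before they reach the message bus is a common way to ease congestion while keeping messages flowing. The sketch below is only illustrative, assuming a Node/TypeScript publish path; the event names, the flag, and the publish function are hypothetical and not our actual code.

```typescript
// Illustrative load-shedding sketch (hypothetical names, not our codebase):
// drop non-essential event types before publishing so critical traffic
// (chat messages) keeps flowing while the cluster is congested.

type EventType = "message" | "typing" | "user_update";

interface RealtimeEvent {
  type: EventType;
  channelId: string;
  payload: unknown;
}

// Toggled by operators for the duration of the incident.
let shedNonEssential = true;

// Event types that can be safely skipped under load.
const NON_ESSENTIAL: ReadonlySet<EventType> = new Set(["typing", "user_update"]);

async function publishEvent(
  event: RealtimeEvent,
  publish: (e: RealtimeEvent) => Promise<void>,
): Promise<void> {
  // While shedding, silently drop typing indicators and user updates.
  if (shedNonEssential && NON_ESSENTIAL.has(event.type)) {
    return;
  }
  await publish(event);
}
```

The trade-off is that typing and presence-style updates temporarily stop appearing, which matches the behavior described in this update.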
-
Identified
I suspect we are hitting limits with our message pubsub; a solution is being put together.
Update: Scaled vertically for now.
-
Monitoring
Production services are now scaled up. There is a possibility of hitting further bottlenecks, but we should be okay for the moment. We will continue to monitor and improve the deployment pattern.
-
Identified
Ordered more servers; waiting for fulfillment. Service is generally stable right now, but more load is expected during peak hours today or tomorrow.
-
Identified
Single-node cluster deployment was successful; now scaling it up.
-
Identified
Deployment is taking a little longer than expected, but this should be resolved in less than an hour.
-
Identified
We are currently working on scaling up our services.