There was a service degradation of our reference server.
There was a service degradation of our reference server.
In our efforts to add a new distribution (Debian 12), we accidentally deleted an entry in our architecture database table. That made every attempt to fetch this architecture through its associations crash.
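As a rough illustration only (plain ActiveRecord with a made-up schema, not our actual code; it needs the activerecord and sqlite3 gems), this is the kind of crash a dangling reference to a deleted lookup row can cause:

```ruby
# Hypothetical sketch: a row deleted from a lookup table leaves stale foreign
# keys behind, so fetching the record through an association returns nil and
# the next method call on it crashes.
require "active_record"

ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")

ActiveRecord::Schema.define do
  create_table :architectures do |t|
    t.string :name
  end
  create_table :repositories do |t|
    t.string :name
    t.integer :architecture_id
  end
end

class Architecture < ActiveRecord::Base; end

class Repository < ActiveRecord::Base
  belongs_to :architecture, optional: true
end

arch = Architecture.create!(name: "x86_64")
repo = Repository.create!(name: "Debian_12", architecture: arch)

# The accidental deletion: the architecture row disappears,
# but repositories still reference its id.
Architecture.where(name: "x86_64").delete_all

repo.reload
repo.architecture        # => nil (dangling foreign key)
repo.architecture.name   # => NoMethodError: undefined method `name' for nil
```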
After implementing the feature that allows users to comment on specific lines in the changes of a request, we also introduced notifications for those comments. However, a corner case caused the notifications to fail, so our reference server did not send any web notifications or emails during the Easter holidays (April 6th to 11th).
Some maintenance operations, plus a missing configuration rollback, caused a 20-minute certificate error on our reference server.
Our reference server stopped sending out email notifications on February 3rd, 2023. In the lines below you will find a detailed explanation of what happened.
Some maintenance operations caused a long downtime on our reference server. In the lines below you will find a detailed explanation of what happened.
After yesterday’s deployment, we faced a downtime on our reference server. In the lines below you will find a detailed explanation of what happened.
After yesterday’s deployment, we faced a downtime on our reference server. We want to share with you a detailed explanation of what happened.
There was a severe service degradation of our reference server. On 2022-05-31 a deployment of OBS failed and led to a downtime. We want to give you some insight into what happened.
There was a severe service degradation of our reference server. On 2022-05-18 the events system of OBS started to fail, causing problems in different areas of the application. We want to give you some insight into what happened.
There was a severe service degradation of our reference server. On 2022-05-10 a deployment of OBS failed and led to a downtime. We want to give you some insight into what happened.
There was a severe service degradation of our reference server. We want to give you some insight into what happened.
There was a severe service degradation of our reference server. We want to give you some insight into what happened.
The https://build.opensuse.org OBS instance was unavailable for eleven minutes, outside of our maintenance window. We want to give you some insight into what happened.
After today’s deployment, our reference server suffered from a severe service degradation. We want to give you some insight into what happened.
After today’s deployment, we faced a downtime of our reference server for users in our beta program. We want to give you some insight into what happened.
After today’s deployment, we faced a downtime of our reference server. We want to give you some insight into what happened.
After today’s deployment we faced a downtime of our reference server. We want to give you some insight into what happened.
After today’s deployment we faced a downtime of our reference server. We want to give you some insight into what happened.
During yesterday’s deployment we faced some issues. We want to give you some insight into what happened.
During today's deployment we faced some issues. We had to disable RabbitMQ support in build.opensuse.org for some hours.
During yesterday's deployment we faced some issues. We had to monkey-patch some fixes, and we want to give you some insight into what happened.
During deployment, we faced some issues and build.opensuse.org was not accessible for a couple of minutes.
This sucks, and that's why we want to give you some insight into what happened.
We did it again! Yesterday, on the 19th of July 2017, we had an extended deployment time because of an issue during the deployment. Though this time it "only" took 15 minutes ;-)
This sucks, and that's why we want to give you some insight into what happened.
On June 30, 2017 we had an extended deployment time of roughly 45 minutes for our reference server because of a couple of problems with one of the data migrations. We implemented a new feature, user notifications via RSS, that included a migration of data in our database. This migration was broken, causing the deployment to go terribly wrong.
The frontend team met afterward to do a post-mortem to identify the problems, solutions, and possible takeaways for the future. This is the first post-mortem meeting we have held, hopefully but not likely the last. Here is the report.