Upgrade issues at work
Jan. 26th, 2018 05:35 pm
Further to Tuesday's journal entry on timesheets: during yesterday morning's upgrade we could not reproduce the crash on the local production server, and we disproved my hypothesis about the previous crash's cause. After its upgrade the server's web front end exhibited a resource exhaustion issue, so we downgraded that component. It is unfortunate that we missed this problem in testing. In the UK it is very difficult to keep research software projects grant-funded on an ongoing basis, but we still put significant effort into managing software quality. I expect us to devise and release a fix promptly, and it is unusual for us not to catch an issue sooner; we will look at ways to make such mistakes even less likely in future.
I am not much help with this particular unexpected problem: my understanding is that it lies somewhere deep in our session infrastructure, which uses ZeroC Ice and Django/WSGI, and my own experience with mainstream web application technologies is rather more on the Spring and JSP side.
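For readers unfamiliar with that stack, here is a minimal, purely illustrative sketch of the kind of per-request interaction between a Django view and a ZeroC Ice service where resources can leak if the communicator is not torn down; the endpoint string and view name are hypothetical and not our project's actual code.

```python
# Hypothetical sketch only: a Django view pinging an Ice-backed session
# service. SESSION_ENDPOINT and session_status are illustrative names.
import Ice
from django.http import JsonResponse

SESSION_ENDPOINT = "SessionManager:tcp -h localhost -p 4063"  # hypothetical

def session_status(request):
    # Creating a communicator per request is simple but costly; if it is
    # never destroyed, its sockets and threads accumulate and the web
    # front end can eventually exhaust file descriptors or threads.
    communicator = Ice.initialize()
    try:
        proxy = communicator.stringToProxy(SESSION_ENDPOINT)
        proxy.ice_ping()  # cheap liveness check on the remote Ice object
        return JsonResponse({"session_backend": "reachable"})
    finally:
        communicator.destroy()  # release sockets and threads promptly
```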