For this week, we all read *Detecting DoS Attacks in Microservice Applications: Approach and Case Study*. This 2022 paper has three really clear research questions:
- RQ1. Which application-level metrics should be monitored to detect DoS attacks? (Number of sockets, number of threads, and percentage of CPU throttled are the three standout metrics; a rough collection sketch follows this list.)
- RQ2. Which way of investigating monitored data across multiple services most increases the success of detecting DoS attacks in a microservice application? (When you look at each service in parallel, on a combined timeline, you detect DoS more easily, surprise surprise.)
- RQ3. Can we use machine learning to detect low- and high-volume DoS attacks effectively in microservice applications? (Yes; XGBoost was the most successful model here, at ~94% accuracy. Whether that should be the only thing we use to detect DoS attacks is still an open question. A toy classifier sketch also follows below.)
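
To make RQ1 a bit more concrete, here is a minimal sketch of how those three metrics could be scraped for a single service process on a Linux host. The /proc and cgroup-v2 file layouts are standard, but the specific PID, cgroup path, and the "fraction of throttled periods" interpretation of the CPU metric are my assumptions, not the paper's actual monitoring setup.

```python
import os

def count_sockets(pid):
    """Count open sockets for a process by inspecting /proc/<pid>/fd symlinks.
    (Requires permission to read the target process's fd directory.)"""
    fd_dir = f"/proc/{pid}/fd"
    count = 0
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd may have closed between listdir and readlink
        if target.startswith("socket:"):
            count += 1
    return count

def count_threads(pid):
    """Read the Threads field from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("Threads:"):
                return int(line.split()[1])
    return 0

def cpu_throttled_fraction(cgroup_path):
    """One interpretation of 'percentage of CPU throttled': the fraction of
    enforcement periods in which the cgroup was throttled (cgroup v2 cpu.stat)."""
    stats = {}
    with open(os.path.join(cgroup_path, "cpu.stat")) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    periods = stats.get("nr_periods", 0)
    return stats.get("nr_throttled", 0) / periods if periods else 0.0
```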
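And for RQ3, here's a minimal sketch of what an XGBoost-based detector over windowed metrics could look like. The file name `metrics_windows.csv`, the column names, and the hyperparameters are all hypothetical placeholders; the paper's actual feature set, labeling, and model configuration may differ.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Hypothetical dataset: one row per (service, time window), with the three
# RQ1 metrics as features and a binary label (1 = window overlaps a DoS attack).
df = pd.read_csv("metrics_windows.csv")  # assumed file, not from the paper
features = ["num_sockets", "num_threads", "cpu_throttled_pct"]  # assumed columns

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_attack"],
    test_size=0.3, stratify=df["is_attack"], random_state=42,
)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# Feature importances hint at which metric carries the detection signal.
print(dict(zip(features, model.feature_importances_)))
```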
This paper’s key contributions are also pretty clear: the authors do their due diligence investigating each question, and they follow through with a machine learning implementation for detection based on RQ3.
When reading this, many avenues for future work emerged, so I was surprised that this workshop-size paper didn’t sketch any future work of its own. Anyway, here are three things I’d like to explore next. First, I want a more detailed analysis of how the thread metric contributes to DoS detection. Second, I’m curious how measures such as health checks and load balancers amplify DDoS attacks in microservices. What I’m most curious about, though, is how high- and low-volume DDoS attacks degrade TeaStore, DeathStarBench, TrainTicket, Unguard, and maybe other testbeds. Microservice architectures are already heavily varied, so degradation in one system may not look the same as degradation in another.
Presenter: Sarah Abowitz