I’m pretty sure they touch on those points in the paper; they knew they were overloading it and were looking at how it handled that in particular. My understanding is that they’re testing failure modes to probe the inner workings to some degree: they discuss the impact of filling up the context in the abstract, mention it’s designed as a stress test, and say they’re particularly interested in memory limits, so I’m pretty sure they’ve deliberately chosen not to cater to an LLM’s ideal conditions. It’s not really a real-world use case of LLMs running a business (even if that’s the framing given initially), and it’s not just a test to demonstrate capabilities; it’s an experiment meant to break them in a simulated environment. The last line of the abstract kind of highlights this: they’re hoping to find flaws so the models can be improved generally.
Either way, I just meant to point out that they can absolutely just output junk as a failure mode.
Yeah, I get it. I don’t think it’s necessarily bad research or anything. I just feel like it might have been better approached as two papers:
1. Look at the funny LLM and see how far off the rails it goes if you don’t keep it stable, just let it kind of “build on itself” iteratively over time, and don’t put the right boundaries on it.
2. How should we actually wrap an LLM in a sensible framework so that it can pursue an “agent”-type task: what leads it off the rails and what doesn’t, what are some ideas for keeping it grounded, and which of those work and which don’t.
And yeah, obviously they can get confused or output counterfactuals or nonsense as a failure mode; what I meant was that they don’t really do that in response to an overload / “DDoS” situation specifically. They might do it as a result of too much context or a badly set-up framework around them, sure.
I meant they’re specifically not going for that, though. The experiment isn’t about improving the environment itself; it’s about improving the LLM. Otherwise they’d have spent the paper evaluating the effects of different environments rather than different LLMs.