  • Not only is this inaccurate, it still doesn’t make sense when you’re talking about a bipedal manufacturing robot.

    Like motion capture, all you need to capture from remote operation of the unit is the operator’s input articulation, which is then translated into acceptable movements on the unit using input from its local sensors. Most of these things (if using pre-captured operating data) are just trained on iterative scenarios and retrained for major environmental changes. They don’t use live tele-operation because it’s inherently dangerous and takes many of the local sensor inputs offline, for obvious reasons.
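To make the retargeting step concrete, here’s a toy sketch of the idea: the operator’s raw joint angles get clamped against the unit’s own limits, shrunk by a margin reported by local sensors, before anything reaches the actuators. All names and numbers here are illustrative, not from any real robot stack.

```python
def retarget(operator_angles, joint_limits, sensor_margin):
    """Map raw operator articulation to safe joint commands."""
    commands = []
    for angle, (lo, hi) in zip(operator_angles, joint_limits):
        # Shrink the allowed range by the margin the local sensors report
        # (e.g. proximity to an obstacle), then clamp the operator input.
        safe_lo, safe_hi = lo + sensor_margin, hi - sensor_margin
        commands.append(min(max(angle, safe_lo), safe_hi))
    return commands

# Operator overdrives the first joint; the clamp keeps it in range.
cmds = retarget([2.0, 0.1], [(-1.5, 1.5), (-1.0, 1.0)], sensor_margin=0.1)
```

This is also why taking sensors offline during live tele-operation is so dangerous: without the margin term, the operator’s input goes straight to the joint limits.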

    OC is saying what robotics engineers have been saying about these bipedal “PR bots” for years: the power and effort required simply to make these things walk make them incredibly inefficient, and they make no sense in a manufacturing setting where they will just be doing the same repetitive tasks over and over.

    Wheels move faster than legs, single purpose mechanisms will be faster and less error-prone, and actuation takes less time to train.

  • Here’s their plan:

    1. Claim open investigations to not release certain files
    2. Stall for the holidays
    3. When someone calls yet another referendum or forces testimony in Congress again…stall
    4. Someone in Congress finally admits the files released are not complete because they have seen the unredacted versions
    5. Stall again

    They will ratchet up all the bullshit pain they are inflicting on Americans through ICE as much as they possibly can in that time, and try to force Representatives to back off any further action until they relent.


  • From your own linked paper:

    To design a neural long-term memory module, we need a model that can encode the abstraction of the past history into its parameters. An example of this can be LLMs that are shown to be memorizing their training data [98, 96, 61]. Therefore, a simple idea is to train a neural network and expect it to memorize its training data. Memorization, however, has almost always been known as an undesirable phenomena in neural networks as it limits the model generalization [7], causes privacy concerns [98], and so results in poor performance at test time. Moreover, the memorization of the training data might not be helpful at test time, in which the data might be out-of-distribution. We argue that, we need an online meta-model that learns how to memorize/forget the data at test time. In this setup, the model is learning a function that is capable of memorization, but it is not overfitting to the training data, resulting in a better generalization at test time.

    Literally what I just said. This is specifically addressing the problem I mentioned, and the paper goes on in exacting detail about why it does not exist in production tools for the general public (it would never make money, and honestly, it’s slow). In fact, there is a minor argument later on that bolting on a separate supporting memory system means the result can hardly even be called an LLM anymore, and the referenced papers linked at the bottom dig even deeper into exactly the limitations of these models that I mentioned.
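For anyone who hasn’t read the paper, the “online meta-model” idea in the quoted passage can be sketched in a few lines: a small memory matrix M is updated at test time by a gradient step on an associative loss ||M·k − v||², so the module memorizes new key/value pairs as they stream in rather than relying on what was baked in during training. This is a deliberately minimal toy, not the paper’s actual architecture.

```python
import numpy as np

def memorize(M, key, value, lr=0.1):
    """One online update at test time: push M(key) toward value."""
    err = M @ key - value          # surprise / prediction error
    grad = np.outer(err, key)      # gradient of 0.5 * ||err||^2 w.r.t. M
    return M - lr * grad

rng = np.random.default_rng(0)
M = np.zeros((4, 4))               # empty memory before test time
k, v = rng.normal(size=4), rng.normal(size=4)
for _ in range(200):               # repeated exposure while serving
    M = memorize(M, k, v)
# After the updates, M recalls v when probed with k.
```

The catch is exactly the one noted above: every query now costs gradient steps at inference, which is why nothing shaped like this ships in general-public production tools.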