PM_ME_VINTAGE_30S [he/him]

Anarchist, autistic, engineer, and Certified Professional Life-Regretter. I mostly comment bricks of text with footnotes, so don’t be alarmed if you get one.

You posted something really worrying, are you okay?

No, but I’m not at risk of self-harm. I’m just waiting on the good times now.

Alt account of [email protected]. Also if you’re reading this, it means that you can totally get around the limitations for display names and bio length by editing the JSON of your exported profile directly. Lol.

  • 1 Post
  • 180 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • A deep neural adaptive PID controller would be a bit overkill for a simple robot arm, but for say a flexible-link robot arm it could prove useful. They can also work as part of the controller for systems governed by partial differential equations, like in fluid dynamics. They’re also great for system identification, the results of which might indicate that the ultimate controller should be some “boring” algorithm.
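    For reference, the “boring” baseline looks something like this — a plain PID loop on a toy first-order plant. This is a minimal sketch; the plant model (dx/dt = −x + u) and the gains are made up for illustration, not tuned for any real arm.

```python
# Minimal PID sketch driving a toy first-order plant toward a setpoint.
# Plant model and gains are illustrative only.
def simulate_pid(kp=2.0, ki=1.0, kd=0.1, setpoint=1.0, dt=0.01, steps=2000):
    x = 0.0                      # plant state
    integral = 0.0
    prev_error = setpoint - x
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        x += (-x + u) * dt       # first-order plant: dx/dt = -x + u
        prev_error = error
    return x
```

    An adaptive (neural or otherwise) version would update kp/ki/kd online instead of leaving them fixed; the loop structure stays the same.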


  • Since I don’t feel like arguing

    I’ll try to keep this short then.

    How will these reasonable AI tools emerge out of this under capitalism?

    How does any technology ever see use outside of oppressive structures? By understanding it and putting it to work on liberatory goals.

    I think that crucial to working with AI is that, as it stands, the need for expensive hardware to train it makes it currently a centralizing technology. However, there are things we can do to combat that. For example, the AI Horde offers distributed computing for AI applications.

    And how is it not all still just theft with extra steps that is immoral to use?

    We gotta find datasets that are ethically collected. As a practitioner, that means not using data for training unless you are certain it wasn’t stolen. To be completely honest, I am quite skeptical of the ethics of the datasets that the popular AI products were trained on. Hence why I refuse to use those products.

    Personally, I’m a lot more interested in the applications to robotics and industrial automation than generating anime tiddies and building chat bots. Like I’m not looking to convince you that these tools are “intelligent”, merely useful. In a similar vein, PID controllers are not “smart” at all, but they are the backbone of industrial automation. (Actually, a proven use for “AI” algorithms is to make an adaptive PID controller so that it can respond to changes in the plant over time.)



  • Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

    We definitely don’t need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability, relative to other approaches (if limited compared to humans), to generalize to “nearby but different enough” tasks. And once they’re trained (and possibly quantized), they (LLMs and reinforcement learning policies) don’t require that much more power to run than traditional algorithms. So IMO, the question should be “is it worthwhile to spend the energy to train X thing?” Unfortunately, the capitalists have been the ones answering that question because they can do so at our expense.
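    On the quantization point, here’s a toy sketch of the idea: map float weights to 8-bit integers plus a single scale factor. Real schemes (per-channel scales, zero points, etc.) are more involved, and these weights are made-up numbers.

```python
# Toy post-training weight quantization: floats -> int8 + one scale factor.
# Real quantization schemes are more involved; weights here are made up.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(qweights, scale):
    return [q * scale for q in qweights]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize(weights)                   # small ints + one float
restored = dequantize(q, scale)                # approximate reconstruction
```

    The memory win is that each weight shrinks from 32 bits to 8, at the cost of a bounded rounding error (at most half the scale per weight).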

    For a person without access to big computing resources (me lol), there’s also the fact that transfer learning is possible for both LLMs and reinforcement learning. Easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we can avoid training all the agents from scratch. Fine-tuning can typically be done on “normal” computers with FOSS tools.
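    In code, the idea is just “freeze the expensive shared part, train only a small task-specific head.” Here’s a deliberately tiny sketch with made-up numbers — a frozen one-weight “math” feature extractor, plus a head fine-tuned for a new task by gradient descent:

```python
# Toy transfer-learning sketch, not a real pipeline. "Pretraining" gave us a
# shared weight; "fine-tuning" freezes it and fits only a small task head.
def fit_head(shared_w, data, lr=0.1, epochs=200):
    """Gradient descent on the task head only; shared_w stays frozen."""
    head = 0.0
    for _ in range(epochs):
        for x, y in data:
            feat = shared_w * x                  # frozen shared representation
            pred = head * feat
            head -= lr * (pred - y) * feat       # update the head only
    return head

shared_w = 2.0                                   # "pretrained": doubles inputs
# New task is y = 6x, so the head only needs to learn 3.0 on top of the
# frozen 2x feature, instead of learning everything from scratch.
task_b = [(1.0, 6.0), (2.0, 12.0), (0.5, 3.0)]
head = fit_head(shared_w, task_b)
```

    Fine-tuning an actual LLM works the same way in spirit: most parameters stay frozen (or get low-rank updates), so the compute bill is a small fraction of pretraining.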

    all it does is remix a huge field of data without even knowing what that data functionally says.

    IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.

    Personally, I’m waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists’ newest plaything.

    Sorry about the essay, but I really think that AI tools have huge potential to make life better for us all, and an even greater potential for capitalists to destroy us all, so long as we don’t understand these tools and use them against the powerful.







  • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org to Memes@lemmy.ml · Math · edited 20 days ago
    Sounds like fun! I’m going to bed soonish but I’m willing to answer questions about multivariable calculus probably when I wake up.

    When I took multivariable calculus, the two books that really helped me “get the picture” were Multivariable Calculus with Linear Algebra and Series by Trench and Kolman, and Calculus of Vector Functions by Williamson, Crowell, and Trotter. Both are on LibGen and both are cheap because they’re old books. But their real strength lies in the fact that both books start with basic matrix algebra, and the interplay between calculus and linear algebra is stressed throughout, unlike a lot of the books I looked at (and frankly the class I took) which tried to hide the underlying linear algebra.
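    That interplay can be made concrete in a single formula: the multivariable derivative *is* a linear-algebra object, the Jacobian matrix, and differentiability means the function is locally well approximated by a linear map:

```latex
f(\mathbf{x} + \mathbf{h}) \approx f(\mathbf{x}) + J_f(\mathbf{x})\,\mathbf{h},
\qquad
J_f(\mathbf{x}) =
\begin{pmatrix}
\dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n}
\end{pmatrix}
```

    Once you see the derivative as a matrix, the chain rule becomes matrix multiplication, which is exactly the viewpoint those older books stress.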





  • I believe it can use ChatGPT, or you could use a local GPT or one of several other LLM architectures.

    GPTs are trained by “trying to fill in the next word”, or more simply could be described as a “spicy autocomplete”, whereas BERTs try to “fill in the blanks”. So it might be worth looking into other LLM architectures if you’re not in the market for an autocomplete.
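    The difference between the two objectives is easy to see with a toy count-based model standing in for the neural network (the corpus here is made up, and real models predict distributions over subword tokens, not whole words):

```python
# Toy contrast between GPT-style and BERT-style training objectives,
# using word counts in place of a neural network. Corpus is made up.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# GPT-style: predict the NEXT token from the left context only.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def autocomplete(word):
    return next_counts[word].most_common(1)[0][0]

# BERT-style: fill in a MASKED token using context from BOTH sides.
blank_counts = defaultdict(Counter)
for left, mid, right in zip(corpus, corpus[1:], corpus[2:]):
    blank_counts[(left, right)][mid] += 1

def fill_blank(left, right):
    return blank_counts[(left, right)].most_common(1)[0][0]
```

    Same data, different training target: the first model can only continue text left-to-right, while the second is built for “cloze”-style tasks like search-result relevance.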

    Personally, I’m going to look into this. Also it would furnish a good excuse to learn about Docker and how SearXNG works.


  • LLMs are not necessarily evil. This project seems to be free and open source, and it allows you to run everything locally. Obviously this doesn’t solve everything (e.g., the environmental impact of training, systemic bias learned from datasets, usually the weights themselves are derived from questionably collected datasets), but it seems like it’s worth keeping an eye on.

    Google using ai, everyone hates it

    Because Google has a long history of doing the worst shit imaginable with technology immediately. Google (and other corporations) must be viewed with extra suspicion compared to any other group or individual because they are known to be the worst and most likely people to abuse technology.

    Literally if Google does literally anything, it sucks by default and it’s going to take a lot more proof to convince me otherwise for a given Google product. Same goes for Meta, Apple, and any other corporations.






  • Infinite-dimensional vector spaces also show up in another context: functional analysis.

    From an engineering perspective, functional analysis is the main mathematical framework behind (1) and (2) in my previous comment. Although they didn’t teach functional analysis for real in any of my coursework, I kinda picked up that it was going to be an important topic for what I want to do when I kept seeing textbooks for it cited in PDE and “signals and systems” books. I’ve been learning it on my own since I finished Calc III like four years ago.

    Such an incredibly interesting and deep topic IMO.
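    A standard first example of those infinite-dimensional spaces, for anyone curious: in $L^2[0, 2\pi]$, functions play the role of vectors, the integral plays the role of the dot product, and the complex exponentials form an orthonormal basis:

```latex
\langle f, g \rangle = \int_0^{2\pi} f(x)\,\overline{g(x)}\,dx,
\qquad
\left\{ \tfrac{1}{\sqrt{2\pi}}\, e^{inx} \right\}_{n \in \mathbb{Z}}
\ \text{orthonormal basis of } L^2[0, 2\pi]
```

    So a Fourier series is literally a coordinate expansion, the same move as writing a vector in $\mathbb{R}^n$ in terms of a basis — which is why functional analysis keeps showing up in PDE and signals-and-systems books.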