• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • For the sake of roleplay and being friends, the idea of disabled people in fantasy settings shouldn’t be difficult to accept, but that doesn’t mean that all fantasy IPs should have all sorts of modern disabilities. In a ttrpg you are creating a collaborative story using the ttrpg’s systems, and in that sense heck yeah you can have magic chairs to transport otherwise disabled people. BG3 straight up cures blindness with a magical prosthetic eye, so there is even precedent for it in the popular D&D video game.

    But what I totally want is some more creative and magical ways to handle disabilities, or maybe just whimsical ones. What about a druid that wildshapes into a snake to move around and just slithers on the ground, straight up never uses a wheelchair cuz snek. Or magical leg armor. Prosthetic eyes? Why not just have a large crystal ball balanced on your head that does the seeing for you.



  • garyyo@lemmy.world to Risa@startrek.website · Space is 2D, right? · 8 months ago

    Realistically there does need to be some consideration, but the medium they travel through isn’t air, it’s the occasional speck of dust, hydrogen atom, and other small stuff. It’s not much, but for interstellar travel there are still considerations, namely reducing your cross-sectional area in the direction of travel. Long and thin gives you less drag since it hits less stuff.
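    Back-of-the-envelope, that drag is just ram pressure from the interstellar medium: force ≈ density × velocity² × area. The numbers below (roughly 1 hydrogen atom per cm³, a 100 m² frontal area, 10% of light speed) are illustrative assumptions, not anything from the thread:

    ```python
    # Rough ram-pressure drag from the interstellar medium: F ≈ rho * v^2 * A.
    M_H = 1.67e-27          # mass of a hydrogen atom, kg
    N_H = 1e6               # assumed ISM density: ~1 atom per cm^3 = 1e6 per m^3
    RHO = M_H * N_H         # mass density of the medium, kg/m^3

    def drag_force(speed_m_s, frontal_area_m2):
        """Approximate force (newtons) from sweeping up the ISM head-on."""
        return RHO * speed_m_s ** 2 * frontal_area_m2

    v = 0.1 * 3.0e8         # cruising at 10% of light speed, m/s
    print(drag_force(v, frontal_area_m2=100.0))  # ~1.5e-4 N: tiny, and halving
    print(drag_force(v, frontal_area_m2=50.0))   #  the frontal area halves it
    ```

    So the force is minuscule at any one instant, which is the point: it only matters because it is applied continuously over interstellar timescales, and it scales linearly with cross-section, which is why long-and-thin wins.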

    Regardless, the airplane look doesn’t make much sense anyway :)


  • garyyo@lemmy.world to Risa@startrek.website · Space is 2D, right? · 8 months ago

    Actually, space in general is mostly 2-dimensional, in that all the interesting stuff generally takes place on some sort of almost-flat plane. A star system is generally on a plane, so is the galaxy, and so are most planet-and-moon systems. They just tend to be different planes, so for ease of communication you will probably just align your idea of down with whatever the most convenient plane is. This of course ignores gravitational down, which changes as your thrust does.

    And as for ship alignment, yeah, no one is going to worry about that till it’s time to dock, at which point the lighter vessel will likely change its orientation, since that’s easier and takes less energy. Spaceships are not going to be within human sight range of each other most of the time, even when in relatively the same area. Space is too big, and getting ships close to each other is dangerous!

    But in media that fucks with people’s idea of meeting and seeing each other, so for the sake of not confusing the audience you don’t see that level of realism often.


  • garyyo@lemmy.world to Risa@startrek.website · Space is 2D, right? · edited · 8 months ago

    In more realistic scenarios, “down” is just defined by the direction of thrust. So when approaching a ship, it will be down if you are decelerating to match its velocity, but up if you are still thrusting toward it.

    But all of that has almost nothing to do with how people will think of orientation relative to other ships, since generally speaking you won’t be using eyesight to communicate ship to ship. At that point an agreed-upon down will be needed: probably aligned with the galactic or star-system plane, and probably the right-hand rule to establish up and down. In general, given that space is big and ships are small, they will just be points on each other’s radar until they need to dock, so it doesn’t really matter how people are actually oriented, as long as what they communicate makes sense to the other side.

    edit: or maybe down is towards the currently orbited gravity well, like towards a planet/moon/star.
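    The agreed-upon convention described above boils down to a cross product: both ships pick the same two reference directions lying in the chosen plane (the vectors below are made up for illustration, e.g. directions toward two landmark stars), and the right-hand rule hands everyone the same “up”. A minimal Python sketch:

    ```python
    def cross(a, b):
        """Right-hand-rule cross product of two 3D vectors."""
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def norm(v):
        """Scale a vector to unit length."""
        mag = sum(x * x for x in v) ** 0.5
        return tuple(x / mag for x in v)

    # Two agreed-upon reference directions lying in the chosen plane
    # (illustrative values; real ships would use shared star fixes).
    ref_a = (1.0, 0.0, 0.0)
    ref_b = (0.0, 1.0, 0.0)

    # Every ship that computes cross(ref_a, ref_b) in the same order
    # gets the same "up", regardless of its own physical orientation.
    up = norm(cross(ref_a, ref_b))
    print(up)  # (0.0, 0.0, 1.0)
    ```

    The order matters: swap the two reference vectors and everyone agrees on the opposite “up” instead, which is why the convention has to fix the order, not just the plane.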


  • garyyo@lemmy.world to Science Memes@mander.xyz · I have attempted science. · 10 months ago

    That’s how it’s supposed to work, and in practice it kinda does, but the people with the money want positive results, and the people doing the work have to do what they can to stay alive and relevant enough to actually do the work. Which means that while most scientists are willing to change their minds about something once they have sufficient evidence, gathering that evidence can be difficult when no one is willing to pay for it. Hard to change minds when you can’t get the evidence to show some preconceived notion was wrong.










  • Always has been. The laws are there to incentivize good behavior, but when the cost of complying is larger than the projected cost of not complying, they will ignore it and deal with the consequences. Us regular folk generally can’t afford not to comply (except for all the low-stakes laws you break on a day-to-day basis), but when you have money to burn and a lot is at stake, the decision becomes more complicated.

    The tech part of that is that we don’t really even know if removing data from these sorts of models is possible in the first place. The only way to remove it is to throw away the old one and make a new one without the offending data (aka retraining the model). This is similar to how you can’t get a person to forget something without some really drastic measures, and even then, how do you know they forgot it? That information may still inform their decisions; they might just not be aware of it, or feign ignorance. The only real way to be sure is to scrap the person. Given how insanely costly retraining a model can be, the laws start looking like “necessary operating costs” instead of absolute rules.




  • The real AI, now renamed AGI, is still very far

    The idea and name of AGI are not new, and AI has not been used to refer to AGI since perhaps the very earliest days of AI research, when no one knew how hard it actually was. I would argue that we are back in those times though, since despite learning so much over the years we have no idea how hard AGI is going to be. As of right now, the correct answer to “how far away is AGI?” can only be “I don’t know.”


  • Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew that it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still at least 10-20 years out. Now, my home computer can run a model that can talk like a human.

    Being able to talk like a human used to be what the layperson would consider AI; now it’s not even AI, it’s just crunching numbers. And this has been happening throughout the entire history of the field. You aren’t going to change this person’s mind; this bullshit of discounting the advancements in AI has been there from the start. It’s so ubiquitous that it has a name:

    https://en.wikipedia.org/wiki/AI_effect