Hydrogen and oxygen burn explosively. It wouldn’t last long
Whoever thought carbon capture was a long-term solution has never studied basic science, I guess.
Carbon capture has its uses, but selling it as a long-term solution is clearly just a lobbyist tactic that survives only because politicians are usually very ignorant about everything other than politics.
It requires continuous, expensive improvements. It's like the real world: building a system that is robust to fraud works in the short term, but in the mid and long term it's impossible. That is why laws change and evolve, why we have governments, and so on. The system reacts to your rules and algorithms, making them less effective.
And these continuous, expensive improvements are made daily, but it is a difficult job.
There go your chances of owning a home. I am sorry :(
It is not, at the moment. Models are built on the assumption of stationarity, i.e. that what they are modelling doesn't change over time, doesn't evolve. This is clearly untrue, and cheating is one of the ways the environment evolves. The only way to account for that is an online, continuously learning algorithm. This exists today and is called reinforcement learning. The main issue is that methods to account for an evolving environment are still under active research, in the sense that methods to address this issue are not yet available.
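To make the stationarity point concrete, here is a minimal toy sketch (my own illustration, not any real anti-cheat or fraud system): a model fitted once on old data stays wrong after the environment drifts, while an online learner that updates on every new observation tracks the change.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical environment: a binary event whose rate drifts over time,
# standing in for behaviour (e.g. cheating patterns) that evolves.
def observe(t):
    p = 0.2 if t < 500 else 0.8  # the world changes at t = 500
    return 1 if random.random() < p else 0

# Static model: estimate the rate once on the first 500 samples, then freeze.
history = [observe(t) for t in range(500)]
static_estimate = sum(history) / len(history)

# Online model: exponential moving average updated on every new observation.
alpha = 0.02
online_estimate = 0.5
for t in range(500, 1000):
    online_estimate += alpha * (observe(t) - online_estimate)

print(f"static estimate after drift: {static_estimate:.2f}")  # stays near 0.2
print(f"online estimate after drift: {online_estimate:.2f}")  # moves toward 0.8
```

Reinforcement learning generalizes this idea: the model keeps updating against the environment instead of assuming it is frozen.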
It is an extremely difficult task tbf
Out of curiosity what model did you use?
Do you have examples? It should only happen in case of overfitting, i.e. too many identical images of the same subject.
They don't remove bugs, but it's easier to solve them without having to wait for some random guy to answer on Stack Overflow.
I don't know about now (I haven't asked a question in ages), but getting a good answer on Stack Overflow sometimes used to take weeks.
GitHub issues are usually more useful
People aren't considering that documentation has greatly improved over time: languages and frameworks have become more abstract and user-friendly, modern code is mostly self-explanatory, and well-documented open source languages and frameworks have become the norm, with good documentation a priority for most open source projects.
Fewer people asking programming-related questions can be explained by programming being an easier and less problematic experience nowadays, that is true.
Even more of "we'll decide if you are worthy of getting my data".
I believe he is talking about Secure Boot.
I read it, and I read the messages from the devs. The communication issue I am trying to point out is also highlighted in the comments: if the decision on merging a PR is dictated solely by IBM's financial interests, ignoring the broader benefit to the community, the message is that Red Hat is looking for free labor and is not really interested in anything else. Which is absolutely the case, as we all know, but writing it down after the recent events is another PR issue, since Red Hat justified its controversial decisions with the lack of contributions from downstream.
The Italian dev tried to play it down as "we have to follow our service management processes, which are messy, tedious and expensive", but he didn't address the problems in the original message. The contributor himself felt like they asked for his contribution just to reject it for purely financial reasons, without any additional details. It is a new PR incident.
Apparently it is already patched on Fedora… just reporting other comments in this thread. But why do they accept contributions to CentOS if they don't want patches that are not economically beneficial to the company? Written like this, it is a pretty bad message.
I stopped recommending it. It is a pity, but there are alternatives
Why would they accept PRs at all if they don't have a robust testing process and approvals are dictated by customer needs?
The message as it stands to potential contributors is that their contribution is not welcome unless it is free labor that financially benefits only IBM.
Which is fair, but the message itself is a new PR issue for Red Hat.
I blocked meme, 196 and shitposting. All is clean now
The problem with current LLM implementations is that they learn from scratch, like taking a baby to a library and telling him "learn, I'll wait out in the cafeteria".
You need a lot of data to do so, just to learn how to write: grammar, styles, concepts, relationships, all without any guidance.
This strategy might change in the future, but the only solution we have now is to refine the model afterwards, let's say.
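A toy way to see what "learning from scratch" means (purely illustrative, a character-level bigram counter rather than a real LLM): given nothing but raw text, the model picks up spelling patterns entirely on its own, and the only reason it works is the amount of text you feed it.

```python
from collections import Counter, defaultdict

# Toy stand-in for "learning from scratch": count which character follows
# which in raw text, with zero guidance. Real LLMs do the same in spirit
# (predict the next token), just at a vastly larger scale.
corpus = "the model learns the language from the data alone"

counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

# After "training", the model predicts the most likely next character.
def predict_next(ch):
    return counts[ch].most_common(1)[0][0]

print(predict_next("t"))  # 'h', because "th" is the most frequent pair after 't'
```

The refinement done afterwards (fine-tuning, feedback-based training, etc.) then adjusts what such a raw statistical learner produces.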
Tbf, biases are an integral part of literature and human artistic production. Eliminating biases means getting "boring" texts. Which is fine by me, but a lot of people will complain that AI is dumb and boring.
But it is for Wi-Fi communication, apparently. Unfortunately, short wavelengths are absorbed more easily than the longer wavelengths used by current radio/microwave solutions. That is the main physical limitation to overcome.
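The frequency penalty can be sketched with the standard free-space path-loss formula, FSPL = (4πdf/c)²; this ignores absorption, which only makes the short-wavelength case worse. The 2.4 GHz vs. 60 GHz comparison below uses my own illustrative numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB for isotropic antennas:
    FSPL = (4 * pi * d * f / c) ** 2, expressed in decibels."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Same 100 m link at 2.4 GHz (classic Wi-Fi) vs 60 GHz (mmWave):
loss_24 = fspl_db(100, 2.4e9)
loss_60 = fspl_db(100, 60e9)
print(f"2.4 GHz: {loss_24:.1f} dB")
print(f"60 GHz:  {loss_60:.1f} dB")
# The 60 GHz link loses ~28 dB more (= 20*log10(60/2.4)), before even
# counting the extra atmospheric absorption that hits mmWave hardest.
```

So every step up in frequency costs signal budget twice: once through the path-loss scaling above, and again through absorption.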
Unfortunately that is not the case. Closed-source software for small communities is not safer. My company had an incredibly embarrassing data leak because they outsourced some work and trusted software that was also used by competitors. Unfortunately, the issue was found by one of our customers and ended up in the newspapers.
Absolutely deserved, but still, closed-source stuff is not more secure.