  • No, but I don’t think that was ever the plan. The counteroffensive can succeed in a few different ways - cutting off supply lines to the occupied areas in the south, attriting Russian forces badly enough to force another unpopular round of conscription, or a major breakthrough like the Kharkiv offensive - but none of them is war-ending by itself. It’s possible that Russia has a sudden change of heart and withdraws if it suffers a particularly painful loss or Putin dies, but that seems very unlikely to me.

    Sending Ukraine things like more modern jets and tanks only started this year, which clearly signals an intention to keep supporting Ukraine for a while. Nobody sends someone a fighter jet if they think it’ll no longer be needed in a month or two, after all.

  • Yes, carbon sequestration is the term for it, but none of the approaches is currently practical at a scale that would offset the fossil fuels we burn. Growing trees is one example - they do lock up carbon as they grow - but trees are a risky prospect, since if a tree dies and rots, or is caught in a wildfire, it releases the carbon again. Another option is literally sticking the carbon back underground in mines or depleted oil wells, but that takes a lot of energy, and the whole point of burning fossil fuels is to get energy, so this one is currently a bit self-defeating. These are things that might be worth doing if we succeed in transitioning to clean energy and have an excess of it available.
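
    To put the “self-defeating” bit in rough numbers, here’s a toy energy balance. Every figure in it is a made-up placeholder, just to show the shape of the problem, not a measured value:

```python
# Back-of-envelope energy balance for "burn fossil fuel, then pump the
# CO2 back underground". ALL numbers are hypothetical placeholders.

energy_per_kg_fuel_mj = 24.0       # hypothetical: MJ gained from burning 1 kg of fuel
co2_per_kg_fuel_kg = 2.4           # hypothetical: kg of CO2 released per kg of fuel
capture_cost_mj_per_kg_co2 = 7.0   # hypothetical: MJ to capture and store 1 kg of CO2

gross = energy_per_kg_fuel_mj
cost = co2_per_kg_fuel_kg * capture_cost_mj_per_kg_co2
net = gross - cost

print(f"gross: {gross:.1f} MJ, sequestration cost: {cost:.1f} MJ, net: {net:.1f} MJ")
# As the capture cost approaches the energy the fuel gave you, the net
# gain goes to zero and the burn-then-capture loop stops being an energy source.
```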

  • Surely it can’t just be a case of the LM giving a hard yes or no to every “is this prime” question with the data they have, though? The results are a significant majority one way or the other in each case, but never 100%. Out of the 500 queries each time, GPT3.5 has 37 answers go against the majority in March and 66 in July. That doesn’t look like one fixed answer to every primality query to me, though that does come with the caveat that I’m by no means well studied on the topic.
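
    Quick arithmetic on those counts, just to make the rates explicit (the 37 and 66 are the figures quoted above):

```python
# Share of the 500 queries per run where GPT3.5's answer went against
# its own majority answer, using the counts quoted above.
total = 500
dissenting = {"March": 37, "July": 66}

for month, count in dissenting.items():
    print(f"{month}: {count}/{total} = {count / total:.1%} against the majority")
# March: 37/500 = 7.4%, July: 66/500 = 13.2% - neither run looks like a
# model giving one fixed answer to every primality question.
```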

  • That explanation of the prime number thing doesn’t seem to match what’s actually in the paper. GPT4 goes from a wordy explanation of how it arrived at the correct answer, “yes”, to a single-word incorrect “no”. GPT3.5 goes from a wordy explanation with the right chain of thought but the wrong answer, “no”, to a very wordy explanation with the correct answer, “yes”. Neither of those seems consistent with either model just answering one way for everything.
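
    For anyone who wants to sanity-check answers like that directly, primality at prompt-friendly sizes is trivial to verify; here’s a minimal trial-division sketch (the input is an arbitrary example, since the paper’s actual test numbers aren’t quoted above):

```python
def is_prime(n: int) -> bool:
    """Deterministic trial division - plenty fast for prompt-sized integers."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    factor = 3
    while factor * factor <= n:
        if n % factor == 0:
            return False
        factor += 2
    return True

print(is_prime(101))  # True; 101 is an arbitrary example, not a number from the paper
```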