Yeah, that’s too much for me lol
TL;DR: storing information requires a capacitor that holds a charge equivalent to 1 bit of information. 8 bits can hold a number between 0 and 255, for example. The charge a capacitor can hold is proportional to its plate area, which becomes a problem when you shrink the chip. They figured out clever ways to stack the capacitors vertically, which kept the charge large enough to be stable while making the capacitor's 2D footprint small enough to integrate with the transistors. This was done for RAM first, and it's now making its way into the CPU's cache, which is a kind of quick-access RAM built into the CPU itself. Cache lets the CPU operate faster than the RAM and only reach out to the RAM on occasion. The theory behind more cache is that the CPU doesn't have to reference the RAM as often, and since the CPU is faster than the RAM, this in theory speeds things up. But it really depends on how the application is implemented and whether it aligns with how the CPU decides to keep information in the cache rather than sending it off to the RAM. That's why a larger cache doesn't necessarily mean higher performance. In fact, it tends to be lower.
The reason it's lower is that the cache is stacked as multiple layers, which makes the chip more heat sensitive because the heat has to transfer through more layers. To combat the heat issue, they cut back on how much power the CPU gets by reducing the frequency.
They apparently solved the issue of heat transfer by putting the cache on the bottom of the chip. This is for the 9000 series.
Hmm. I hadn’t heard that, but it is interesting. It sounds feasible.
The reason they're heading toward more cache is that they've hit a new limit on transistor size. They've been battling the size of capacitors, but now the transistors can't get any smaller, so in essence they're sidestepping the problem. The next innovation has to come from materials science instead of industrial engineering. Packing more transistors into a smaller space was mostly an engineering feat using existing science; making transistors any smaller requires an advance in materials science, and there are a number of candidates that may pan out. The other obvious solution is to simply make the chips bigger, but the cost of producing high-quality, perfectly smooth silicon wafers is sky high. A standard 12-inch silicon wafer generally costs around three grand, which is nuts. So if you make CPUs larger, there's a linear cost increase that's hard to eliminate, meaning you could make CPUs that are more powerful, but not more powerful at a given price point.
But heat seems to be the primary issue with stacking layers of transistors and capacitors on top of one another. That means they'll probably optimize for power consumption in the future, since circuits that use less power are easier to stack. If the extra room gained from stacking outweighs the speed lost to power optimization, it might increase CPU power for a while, but that probably has a limit too. Pretty soon direct-die water blocks will be a standard cooling requirement. Manufacturers can keep CPU prices low because the consumer pays for the cooling device. In fact, they've been doing this for a while: they stopped bundling heatsinks with their CPUs back around 2010 or so. They know the CPU needs very good cooling to reach the figures they advertise, and that most people won't buy adequate cooling, but it lets them sell CPUs at a lower price.
Ironically, the best CPU water block is an ultra-thin 0.00001 mm layer of copper with water sprayed onto it at 10,000 miles per hour. But there are limitations to how thin the copper can be and how fast the water can move, so it’s a game of balancing the distance from the CPU to the water in combination with maximizing the surface area between the water and the copper. There is one other parameter and that’s the resistance to flow inside the copper block but you can overcome that with a big enough pump so in theory you can ignore it and just optimize the other two factors.
Cham needs a new PC since his current one is flying out the window as we speak. Wasn’t he supposed to be some creative genius or something? The dude played roach-ape zvz and died ezpz to mass swarm host.
It’s pretty clear EU isn’t the gold standard of SC2 anymore because 6k EU is weaker than 5k NA. It was honestly a much easier game than most 5200 NA zergs. EU GMs are so used to spamming APM they are like a deer in headlights as soon as someone deviates from standard play. They have no clue what to do. It’s like they’ve never played SC2 before if they have to think on their feet.
Watch as he realizes the slow creep of MMR-obliteration can reach his nat’s hatchery.
Use the swarm hosts to destroy Cham’s Army
Use the swarm hosts to cancel a hatchery losing Cham 75 minerals
His army is helping me to win. Why would I want to fight his army?
To reinforce how common thermally throttled CPUs are, and how upatree was probably having overheating issues, here is a chick who buys a $1,500 CPU with a 330-watt rating but slaps a $50 air cooler onto it.
Most space heaters max out around 1,000 watts. So you've got a CPU producing a third of a space heater's heat output in a 4x4 centimeter footprint, and you think a cheap air cooler is going to cut it?
This is why people see higher performance with the x3d chip variants. They don't know how to build PCs with good thermal management, so they benefit from a CPU whose performance doesn't depend on dissipating a lot of heat. The x3d chips are technically inferior CPUs, but in a practical sense, given that most people can't do proper thermal management, they are actually the better choice.
For contrast, a premium Noctua NH-D15 has a TDP rating of 220 watts, with dual fans, dual fin towers, and 33% more heat pipes. Extrapolating, her cooler is good for roughly 6/8 × 1/2 × 220 = 82.5 watts. Her CPU is going to be running at about a quarter of its rated wattage, and she spent $1,500 on the CPU alone.
Fix your heating issues before you go buying a new CPU.
Deep Seek 3 sends their regards.
There isn't anyone who cares about sc2 enough to run a bot network on its social media sites, and Blizzard sure doesn't care enough either. The skills to do that are rare and better spent making $75/hour on projects that actually matter. Nobody has time for the sc2 forums. Reddit, maybe; the sc2 forums, definitely not. It's a good theory, because a lot of the posters here and on reddit definitely behave a lot like bots (bad reasoning, low effort, discontinuity between posts, etc.), so I had the same thought. But there is no API to interact with discourse (the forum software), and developing one just isn't worth the time. Nobody cares about this place or reddit. I could see some youtubers caring enough to promote their reddit content with bots, but there are services for that already. You can literally buy upvotes and comments. How often is that used? Hard to tell, but I am sure you could find suspicious posts with a statistical analysis. It's an interesting question but with no real practical use. I could see an academic doing it for their PhD, or maybe an activist campaigning against the dead internet, or maybe a youtuber rivalry.
EDIT: yeah, so you are talking about the nvidia sell-off. It's boomers who don't understand AI tech. Sabine Hossenfelder had a good video about it. There is a natural limit to the accuracy that emerges from the training time, and this holds across every known model. Deepseek's claims are improbable because they would be a steep departure from past trends. Even if deepseek's algorithm is more efficient, it would still be faster to run it on a GPU, unless of course the algorithm is totally incompatible with GPU architecture, and I've seen nothing to indicate that. I would be surprised if their training pipeline is open source; it's probably the client that runs the already-trained model that is open source. There is no way China would allow the open-sourcing of an algorithm like that, especially when they've worked hard to develop it as a response to CPU and GPU embargoes. So we can't say whether the algorithm is incompatible with GPU architecture or not (probably not). It's a lot of superstitious reasoning not based on the actual science and engineering of AI.
AI research has been the majority of my time as a software developer. I've worked on a program that finds equations to represent systems, that finds alternative ways of computing the same answers (to avoid patents), and that finds approximate solutions to known problems that are faster than the known methods. That was a big part of my life a couple of years ago. To this day, I have an AI running on my gaming PC (since I am in Texas helping a friend) that optimizes centrifugal impellers for car turbochargers. I plan to build one on the CNC machine I prototyped and install it on my car. While relaxing here, I've been working on a new version of the CPU-block-designing AI that made my current CPU block. The hardest part of an AI project is defining the solution space in a way that makes it searchable, and that requires a deep understanding of the actual system you are modelling: you simply don't know the ideal parameters for a given application, and those are exactly what you want to optimize. The program has to have a way of avoiding nonsense parameter combinations as it iterates the design based on past testing.
Well, these algorithms can't be parallelized on a GPU, because the way a GPU gets its speed is by doing a lot of the same calculation in parallel, while these algorithms require steps in series. On a GPU you can calculate A, B, and C at the same time, but if B depends on A and C depends on B, then you can't use a GPU. That's the gist of it.
AI models are mostly matrix multiplication, mass amounts of it, so GPUs are very good at them. Training is iterative optimization, in the spirit of simulated annealing: values in the matrices are tweaked, and if a tweak improves the result it's kept; if not, it's discarded. When a solution is a chain of 1,000 matrices, the probability of finding a tweak that improves the result is very low, so this has to be done millions of times to find each improvement.
If deepseek were incompatible with GPU architecture, it would have to do no matrix math nor any parallelizable calculation. It's very unlikely that it wouldn't benefit from GPU acceleration, and that means the GPU sell-off is dumb. You could say that maybe their training model is better and that means lower demand, but that's very unlikely due to the natural limit I was mentioning. There isn't a name for it and it isn't proven, but it's there in the data. Proving that such a limit exists is probably possible, and it would probably be an adaptation of Shannon's coding theorem. Compression models, as well as the data-encoding models used in telecommunications, have been thoroughly explored, and we know the limits of what's possible. Shannon's theorem tells you how well you can predict a future character based on past characters, which is how data compression works. That's basically what language AIs do, and it almost certainly has a limit similar to the Shannon coding limit.
Anyway I haven't been following the news because I am busy as heck down here. We're talking 11-hour days on my feet: sheetrock, tile, framing, installing windows, plumbing, you name it. It's a full remodel of 2 bathrooms. So anyway I thought this chick I knew was in a relationship with another chick, but it turns out my friend's "girlfriend" is just a friend-girl. My friend let her stay at her house to help her out. I ask her, so, you trying out something new, as I tilt my head at the friend-girl. She almost chokes and says no, I am just helping her out, and she's really smart and deserves a second chance. She has had a hard life but she is super smart. She taught herself how to read at the age of 7. I am thinking Y I K E E S. Her spelling to this day is still atrocious, she continues, and I am thinking oh no, this girl is dumb as a rock and my friend is just lonely and needed some company. So anyway my friend is helping me tile a bathroom floor, and you spread out the mortar, place a tile, clean the edges with your finger, and place levelling clips under the tile which pinch neighboring tiles together to keep them level. Well anyway, each time I used my finger to wipe away the excess mortar, I would flick it into a pile of mortar and it would make a noise. So I am going about my business and she says "that noise is so satisfying" and I am like, what noise. She says that noise, as I do it again. Well, it has a rather striking resemblance to a clapping noise of a rather prurient nature. I wasn't really paying attention to what she was doing as I was zoned in, so I look up at her and say, what is that supposed to mean. Is that some kind of hint. She sticks her tongue out, bites it, looks up into the air, and shakes her head before saying, what do you think it's supposed to mean. Her hair was waving as she tilted her head. She then runs off giggling. I am just sitting here thinking, what an odd time to make a move. I am sweaty, covered in mud and dirt.
I guess the mortar flicking was a trigger. Whatever. You don't look a gift horse in the mouth, if you know what I mean. Well anyway I've got my hands full down here until this Saturday. We were having so much fun I stayed another week, and there will be hell to pay when I return home, but oh well. She has to go back to being lonely as she hangs out with her new roommate. How long until the roommate overstays her welcome?
This girl is a bit of a tomboy and so she has a lot of random tools, but mostly automotive. I am in her garage and there is a random small block chevy crate engine up on an engine mount. She bought an old hot rod to work on, lost interest, sold the hot rod, but still has the engine she bought. So anyway she doesn't have any sheetrock tools nor any tiling tools, so I am now buying $300 and $150 tools so I can have this finished for her before the week is over. I tell her if she wants to keep them, she can have them, but if she doesn't want them, to please resell them for 75% of the value and send the money back to me. It's another tool for the collection, but it's also space in your garage.
Meanwhile the roommate is complaining that things are taking too long. So my friend tells her she should pitch in by banging out some concrete. Well, this girl decides to do the whole thing. Puts in insulation, levels the wall, puts up wallboard. All on her own, she completed a process that took me 3 days. You might think wow, that's amazing, but it looks like an orangutan was let loose with a hammer. It's shaping up to be the worst tile job I've ever seen, and I've seen some really bad ones. I show this to my friend and she doesn't want to be mean to her guest, so she's like, well maybe if you show her why it's not going to work, like you showed me, then she will understand when we rip it all out and redo it. I am like oh man, this girl has adopted this other girl and treats her like a child. So anyway, she invites her to come out and the girl refuses, and now she's a recluse and probably has a grudge against me. Oh well. Anyway, where do you think the name Slam'er came from. Yep, I've laid so much pipe across so many states they ought to send me my plumbing license by now.
ax^3 + bx^2 + cx + d = y
This can be parallelized because each term can be computed separately; the terms have no interdependence until they are added together. This would be a good use of SSE instructions.
Now if it were,
(ax + bx + cx)^3 = y
Then there wouldn't be much to parallelize, because the operations depend on previous operations. You can think of it as chains vs. branches: if the logic branches, it can be parallelized; if the logic chains, it can't.
Browsed the paper. They assign each word an index in a matrix, blah blah. Then found this: they rely heavily on GPU computation, specifically the nvidia architecture, and are asking for improvements to the precision of multiplication as well as changes to how the tensor cores access main memory. They note bottlenecks moving data between gpus, moving data from a gpu's memory to the output buffer and vice versa, as well as limitations in the memory layout. So they want to decide how memory is allocated and can't; they want more control over memory allocation and communication buffering. This is ironically a very similar problem to why sc2 isn't optimized for the cache of x3d chips: you'd need extremely fine-tuned code to take advantage of it, and there is a 0.0% chance sc2 has anything like that. They mention slowdowns due to frequently transferring data between cuda and tensor cores. They say they'd like to do both matrix multiplication and transposition without a transfer, but the current architecture doesn't allow for it. I guess the tensor cores can't transpose, so the data has to pass back to the cuda cores to be transposed and then back to the tensor cores for additional operations.
Anyway, it's pretty obvious they use nvidia GPUs and that they will continue to do so. Furthermore, they are asking for performance enhancements in future gpus. No idea why nvidia investors freaked out. The blackwell chips are supposed to do something different from the chips deepseek used, so a change in demand for one doesn't indicate a change in demand for the other.
This is my suspicion: deepseek's method went for short-term gains but will ultimately run out of steam due to scalability. A lot of the things they complain about sound like scalability issues. They complain about flushing communication buffers to send data between gpus, for example, which is an obvious scaling issue.
There is a common "scam" in the startup industry; fusion research is a common place it occurs. A company makes a new reactor that generates a lot of hype, gets a lot of funding, and then "researches" the reactor by chilling on a million-dollar yacht while sipping margaritas. Whether deepseek is or will be doing something like this remains to be seen. But I am very skeptical of the scalability of their model, considering they complain about rounding errors and things like that. They are operating at the absolute limit of what those gpus are capable of, and they don't have access to the new ones that were designed in cooperation with ai researchers. They demonstrated that professional graphics cards (not optimized for AI, but only 1 year old) can compete with what chatgpt was trained on (optimized for AI, but 4 years old). However, they are operating at the limit of what the h800s are capable of, which means the chip embargo is going to be a brick wall they can't get past. The idea that these chips will be able to compete into the future of AI scaling is absolute nonsense in my opinion. Professional graphics cards are optimized for graphics operations like 4x4 matrix multiplication, and there are going to be some hard limits to what you can achieve on hardware designed for graphics.
Anyway, I talked about this a couple of years ago when the chip embargoes started. The chip industry is extremely complicated, with dozens of sub-industries, and those have sub-industries of their own. The machines that etch the silicon require lenses manufactured to perfection, which requires machines built by machines built by other machines. So china would have to replicate the whole process by reinventing it from scratch. The problem is, nobody understands the whole process; each sub-industry has its own experts and specializations. The embargo ensures they are permanently behind, because by the time they reach a waypoint 10 years from now, we will be 10 more years ahead. Because this sort of thing has an exponential growth factor, it's going to be a big difference. The catch is, china has a larger pool of skilled labor that might fudge the numbers. This industry is run by geniuses from top to bottom. The people who run the machines have incredibly rare skill sets, which means the availability of skill is a huge bottleneck. That's why vivek wants more h1b workers: they think we need to draw from India to make sure the talent pool doesn't fudge the exponential growth too much. I don't think that's the case, because CS grads from the US are a full standard deviation better than ones from India, so the talent pool is larger but has a much lower baseline average. The number of people who can actually help and competently match the skill of american or japanese workers is much lower than they are hoping.
Musk is at a bit of a crossroads, because he wants to provide internet service to every rural location on the planet, but that obviously presents issues when the US is sensitive to the growth of industries inside china. They will obviously be sensitive to the leakage of ideas into china via things like youtube. It's impossible to know what information is holding a chinese industry back, and some random guy watching a random youtube video could fill in a blank that causes them to leap forward in an industry. The internet algorithms are finely tuned to home in on your problems and present you with information that helps you solve them. If you mention a backache next to a microphone, youtube will suddenly start showing you videos about how to stretch your back. That's an obvious national security issue, and the expansion of starlink could aggravate it drastically.
The San Francisco-based ChatGPT maker told the Financial Times it had seen some evidence of “distillation”, which it suspects to be from DeepSeek. The technique is used by developers to obtain better performance on smaller models by using outputs from larger, more capable ones, allowing them to achieve similar results on specific tasks at a much lower cost. Distillation is a common practice in the industry but the concern was that DeepSeek may be doing it to build its own rival model, which is a breach of OpenAI’s terms of service.
They are saying that deepseek might be sending queries to chatgpt and/or precomputing some answers using chatgpt's model and using that as a seed or starting point to train their own model. Since a GPT-trained model would already be very finely tuned, little work would be left to train it any further, which could be why they are so much more cost-efficient.
There is a golden rule to cheating: you can't cheat too much. Dream, the minecraft youtuber, learned that the hard way. Cheaters can bend the rules a little to give themselves an edge, but that requires your model to be within striking distance of chatgpt on its own. If you can't match chatgpt because you have a tiny $5 million budget, the only option is a big cheat, and those are super obvious. We don't know if that's what's happening, but we do know deepseek diverges from a performance pattern observed across the entire ai industry. So either a tiny chinese company did what openai couldn't, without access to the latest hardware and at a fraction of the cost, or it's too good to be true.
Shannon's coding theorem tells us the limit to which data can be compressed, and it's a similar concept to text-prediction AIs: the better you can predict the next word, the more the data can be compressed. The difference is that with AI you also have a training-time parameter. So there is a limit to how good the predictions are based not only on the natural randomness of the data but on how much time is spent training the model. I am not aware of a proof, but this limit almost certainly exists and almost certainly is provable (it'd be an adaptation of Shannon's theorem to include a training-time parameter). I'd guess it's very likely deepseek's AI exceeds that limit. It seems the GPT guys think they got past that limit by "distillation".
The game is slow on older CPUs, and it bogs down badly in team-game battles. Replays are a good way to benchmark CPU bottlenecks. Since the code isn't optimized for newer CPUs, the best and easiest way to tell is just to test them. Generally speaking, more cores doesn't help SC2 run faster. Clock speed correlates well, but isn't the sole factor.
The funny thing about it is that all the unit-related game logic could be computed in parallel across many cpu cores. It used to be that parallel programming was a pain because you'd have to deal with atomic variables and atomic instruction sets, and you'd have to program lock-free queues and lists. Pushing onto and pulling from a queue from multiple threads at the same time is very difficult, because order of operations isn't preserved, among other things. In modern C++, however, it's extremely easy: you find where the code loops through each unit to compute per-unit logic, you replace the loop with a parallel for, and then you find any data structure referenced during the loop and either replace it with a lock-free one or protect it with a mutex. There are better ways to add multithreading, but this one is extremely fast and easy; you could probably have it working in a day or two. Afterwards, it would use every available cpu core when computing unit logic, meaning it would be much smoother for team games and especially for maxed 4v4 fights.
It might affect replays, because the game logic might rely on the ordering of units in the lists / the order in which each unit's logic is computed. Multithreading won't preserve that ordering, so reconstructing replays would probably break, and maybe even multiplayer, so you'd have to debug that, figure out what logic depends on unit ordering, and rework it to be order-independent. Either that, or somehow preserve the ordering despite the multithreading.
An example: two marines are fighting. If the unit logic goes "deal damage → is health at 0? → trigger unit death", then the ordering of units in the list matters: the marine processed first would survive while the other wouldn't. Multithreading breaks this, because you can't guarantee each unit is processed in the same order on every PC in the game. But if all damage for all marines is calculated first, and death triggers are applied in a separate step, it isn't an issue, because death triggers happen the same way regardless of unit ordering. A consequence is that both marines would die simultaneously.
Reports are saying that if you ask deepseek whether it's chatgpt, it will say it is. It's a mistake to reveal this information, because doing so gives cheaters feedback to hone their cheating. It's better to sue. When dealing with proprietary claims, the courts can have an expert examine the evidence behind closed doors to determine its truthfulness, preserving the trade secrets. This would allow them to establish whether cheating happened without revealing their methods to the public. Future AIs that use this style of cost cutting will probably take measures to avoid being caught, as a result of this information being revealed. This is of course still speculation at this stage. I do think it is safe to say that nvidia investors sold prematurely. Tech boomers don't understand the science of computers, so they can be easily tricked by stuff like this. Wouldn't it be interesting if deepseek were some elaborate short-sell scheme? That would be genius, actually.
Hello Bob,
I will take Too Good to be True for $50.
What is the name of the AI claiming to have shattered industry performance trends?
That would be the reason a republican put forward a bill to make it illegal to use. lmao ever since Biden dropped out the US has actually become second in the world order.
Tech boomers gonna boomer.
Several countries folded like cheap suits in the trade wars. Mexico lasted, what, a day, once they realized their economy was nose-diving into a depression. That's not exactly the US losing. If anything, the US is more powerful than ever; truly it's a realization of existing power which to this day is latent. It has always confused me why America is so obsessed with problems on another continent yet somehow can't find time to deal with problems that kill tens of thousands of americans every year. Opiates alone kill 100k people a year. There's an ocean between the US and the EU/middle east/china/india, so what do we care? If they want us to care, they need to pay us. I think the US needs to stop being a charity to a bunch of rando countries that don't have any allegiance to us at all. We're sending money to countries that hate us. Europe has a very strong dislike for American culture. I'd estimate Europe is at least 300 years behind the US in terms of social development. The US's embrace of freedom of speech will allow open source technology to eventually break up all traditional power clusters. Open source machinery will break up manufacturing monopolies that have existed for hundreds of years. Right now the EU is headed into a technocratic dystopia where sensors monitor everything you do and restrict your access to transportation, your bank account, and gasoline based on how well you kiss butt to the wanton desires of the elite. They are repeating the same experiment that led to the Soviet Union, but with modern-day surveillance tech. Nothing good is going to come from that, and the US needs to decouple from it ASAP. It's going to be disastrous. Meanwhile in the US, people will be 3D printing their own open source cars by grinding up aluminum soda pop cans. The EU's cars will void your warranty if you put too many bricks in the trunk; meanwhile in the US you can make whatever car you want and nobody will be able to stop you.
The same will be true for every technology. The EU is headed for ruin and the US is headed for Mars and beyond. But we gotta focus on problems in our own neighborhood and ally ourselves with people within our proximity. It's like a long-distance relationship: it's not gonna work no matter how pretty the girl is and how much she is into you. The #1 thing needed to build a relationship is proximity, and if you don't have that then you don't have a relationship. Well, it's the same thing for international politics. We're trying to build relationships with countries so far away and so irrelevant, meanwhile we ignore opportunities on our own doorstep. So not only are you trying to build a relationship with some chick living 1,000 miles away, you're ignoring the chick next door who turns her head every time you walk by. The reason the EU can afford socialism is that the US foots their protection bill. Take away the free protection and the EU's economy is going to collapse, because it's built on completely unsustainable models. As is, Britain's healthcare system is strained. What happens when suddenly they have to cough up all the money needed to defend Ukraine that the US used to contribute? So they virtue signal how superior they are to us by using our money to do it. Time to pull the plug and let the dust settle where it may. Not our problem.
The reason open source technology will propel the US to Mars and beyond is that open source technology is more available to more people, which increases the development speed and the probability of major advancements. The reason a total embrace of free speech is necessary to make this happen is that free speech laws protect open source designs as speech. The EU will lock you up over a facebook post. What do you think they will do if you invent something that disrupts their power structure? It's off to the gulags with you, bucko. Oh sorry, did we say "gulag"? We actually meant "The Amazing Young Men's Learning Initiative" [which just so happens to be exactly like a gulag].
The EU’s inability to embrace free speech is going to cause them to nose dive into a technocratic dystopia that uses AI to micromanage citizens. There will be no ability to fix the issue because the AI / surveillance state will be too dense of a power concentration to ever break up.
If you want to understand how this works, study the works of Voltaire: “It is dangerous to be right in matters where established men are wrong.” He perfectly captures EU’s dilemma in a mere 13 words. Genius.
The implication of his statement is that the technocratic state will actually stomp out innovation because the innovation makes the state look bad / incompetent etc. Their culture is just too antiquated and outdated for a modern day first world society.
The EU's dilemma is that the US got jealous of Russia selling oil and gas at prices that are the most expensive in the world. It makes sense now that the US is budgeting; you can't spend up to $5 billion on Ukraine's elections in 2014 and not seek out the rewards.
The US will exchange their investment in Ukraine for greenland. Access to resources isn't as big a consideration anymore, because the kinds of technological advancements that need to be made will always be rejected by authoritarian governments. They will never embrace technologies or social policies that undermine their rule. They can have all the oil in the universe and they will still be forever stuck in the stone age. Authoritarianism and collectivism are inherently anti-science and anti-technology. The only time they aren't is when the authoritarian has omniscience. No one man nor any governing body can ever achieve omniscience, and as long as that's true, decentralized thinking and planning will always outperform collectivism. The authoritarian believes he is God and that he knows what is best for all of mankind, but that isn't true (in fact, it's impossible), so he's just a maniacal narcissist who refuses to accept his own limits and has to force his preferences onto other people to "save" them (which he does to feed his superiority complex). The EU will not say no to a greenland deal, because they don't want to deal with a two-pronged attack.
Which is why chinese knock offs of AIs can be spotted as fakes from outer space 1 day after being announced.
I’m convinced the main idea behind that is so the Americans don’t get confused about Iceland being the lush green environment and greenland being the one covered in ice.