Companies all over the world already solve problems with computer algorithms. I bet airlines already have teams of people using them to figure out crew management, flight routing, cost optimization, etc.
The fact that they’re exploring quantum computers and non-classical algorithms just suggests that gate allocation is NP-hard. Sure, things already go wrong when computers fail (look at Southwest’s or Delta’s recent meltdowns), but acting like this is a bad thing is nonsense. It should be seen as a good thing that airlines are working on it.
Why do you think this is going to replace air traffic control work? It’s picking which gate to park the plane at. That’s already done by airline and airport operations teams, not ATC. Imagine if you could automatically pick gates to reduce the time a plane spends taxiing and/or minimize the time passengers spend walking. That’s 100% a useful application for computer optimization algorithms. Humans aren’t going to do it better, and it’s not a safety function that tower or ground control needs to perform.
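Just to make the flavor of the problem concrete, here’s a toy sketch (nothing like what an airline actually runs) that treats gate assignment as a textbook assignment problem and minimizes made-up taxi-plus-walking costs with SciPy. The real version layers on constraints (aircraft size, turnaround overlaps, towing) that push it toward the NP-hard territory mentioned above:

```python
# Toy gate-assignment sketch: all costs are invented for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] = estimated minutes of taxi time plus passenger walking time
# if arriving flight i parks at gate j.
cost = np.array([
    [12,  7, 15],
    [ 9, 11,  6],
    [14,  8, 10],
])

# Hungarian algorithm: optimal for this simplified one-flight-per-gate version.
flights, gates = linear_sum_assignment(cost)
for f, g in zip(flights, gates):
    print(f"flight {f} -> gate {g} ({cost[f, g]} min)")
print("total cost:", cost[flights, gates].sum(), "min")
```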
If you are port forwarding, I recommend not exposing it on the default port of 25565; instead, expose it on a random high port. Then, assuming you have a domain name, create an SRV record that points to your IP and port. This will cut down on the drive-by scanners that sweep common ports, though it won’t totally eliminate them. If you use the SRV record, your friends won’t even notice there’s a different port.
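For anyone curious what the SRV record actually does, here’s a rough sketch of the lookup a Minecraft client performs, using dnspython. The domain mc.example.com and port 49152 are placeholders for your own domain and randomly chosen forwarded port:

```python
# Sketch of the SRV lookup a client does for "mc.example.com".
# The matching zone record would look something like:
#   _minecraft._tcp.mc.example.com. 3600 IN SRV 0 5 49152 mc.example.com.
import dns.resolver  # pip install dnspython

def lookup_minecraft(domain: str) -> tuple[str, int]:
    """Return the (host, port) a client should actually connect to."""
    try:
        answer = dns.resolver.resolve(f"_minecraft._tcp.{domain}", "SRV")
        # Lowest priority wins; weight is normally used for weighted random
        # selection among equal priorities, simplified here.
        record = sorted(answer, key=lambda r: (r.priority, -r.weight))[0]
        return str(record.target).rstrip("."), record.port
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        # No SRV record: fall back to the default port, like the client does.
        return domain, 25565

print(lookup_minecraft("mc.example.com"))
```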
The alternative is to let certain countries de facto claim a region because others are too afraid to call them on their BS
As a professional software dev, I worked with pretty much every OS daily. My personal computer was a Windows machine, my work laptop was a Mac, and I ran my code on Linux, so I was familiar with the things I liked and disliked about each. I also ran my own set of servers with my websites, mail servers, and various research projects to learn and grow.
Then I decided it was time to order a new laptop, and I didn’t want to go to Windows 11 because I felt Microsoft was leaning too hard into features I didn’t want: ads, more tracking, pushing AI. Don’t get me wrong, I like AI, but it was too much about forcing me to use it to justify their stock valuation.
I was also working on reducing my use of big tech, setting up self-hosted services like pi-hole and Home Assistant, and starting to build my own Mint alternative. It just felt natural to get a Framework laptop and try running Linux on it.
I still have a Windows desktop for games and other things, and I still use the Mac at work; I like it for its power efficiency and the fact that it doesn’t get as hot. Linux has some annoyances here and there, like dbus locking up, weird GNOME issues, my screen artifacting for a while until I set some kernel params, or my wifi card crashing until I replaced it with an Intel card, but I’ll stick with it.
There are two main ways of doing geo-based load balancing:

- DNS-based: the authoritative DNS server looks at where the query comes from (the resolver’s IP, or the client subnet if it’s passed along) and returns the address of a nearby data center.
- Anycast: every data center announces the same IP address, and Internet routing (BGP) delivers your packets to the topologically closest one.
Of course, this doesn’t matter for companies that only have one data center.
Sorry, what do you mean by “route it directly”? Maybe I didn’t explain it well enough.
My DNS is routed over the VPN, but Internet traffic is routed directly. The problem is that the load balancing is based on where the DNS server is. Take Google: even though the traffic egresses directly to the Internet, bypassing the VPN, it still goes to a Google DC near my home, because that’s where my DNS server is. Not all websites do this, so it’s not always an issue.
Yes, but if you hit a company doing DNS-based load balancing, DNS is going to return an IP that’s near your DNS server, which may not be near your device. That’s going to add to the latency.
I have WireGuard, and I forward DNS and my internal traffic from my phone over the VPN to my pi-hole at home. All other traffic goes directly over the Internet, not the VPN. That means only DNS encounters higher latency.
However, because a lot of companies do DNS-based geo load balancing, even if I’m on the East Coast all my traffic gets sent to the West Coast, because that’s where my DNS server is. That’s what has the biggest impact on latency.
It’s tolerable on the same continent, but once I start getting into other continents then it gets a bit slow.
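If anyone wants to see the effect, here’s a quick sketch with dnspython that asks two different recursive resolvers for the same name; with DNS-based geo load balancing the answers often differ by resolver location. The hostname and resolver IPs below are just placeholders: in my setup it would be the pi-hole behind the VPN versus a resolver near wherever I actually am.

```python
import dns.resolver  # pip install dnspython

def a_records(hostname: str, nameserver: str) -> list[str]:
    """Ask one specific recursive resolver for the A records of hostname."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return sorted(rr.address for rr in resolver.resolve(hostname, "A"))

# Placeholder comparison: a geo-balanced name queried via two public resolvers.
host = "www.google.com"
print("via 8.8.8.8:", a_records(host, "8.8.8.8"))
print("via 9.9.9.9:", a_records(host, "9.9.9.9"))
```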
Right, it’s a lot better to give people a better alternative first if you want the public on board. Build up public transit, build up regional and high-speed rail, and leave planes for the long distances that unfortunately aren’t suited to trains and cars (e.g. international, cross-continental, etc.).
I think this is a problem with applications that have a privacy-focused user base. It becomes very black and white, where any kind of information being sent anywhere is bad. I respect that some people hold that opinion, and more power to them, but being pragmatic about this is important. I personally disabled this flag, and I recognize this is edging into a risky area, but I also recognize that the Mozilla CTO is somewhat correct: if the choice is between a browser that blocks everything and one that is privacy-preserving (where users can still opt for the former), businesses are more likely to adopt the privacy-preserving standards, and that benefits the vast majority of users.
Privacy is a sliding scale. I’m all on board with Firefox, I block tons of trackers and ads, and I’m even somebody who uses NoScript and suffers the consequences for ideological reasons, but I also enable telemetry in Firefox because I trust that usage metrics will benefit the product.
Why is telemetry useful, or why is it necessary to use pi-hole to block telemetry?
Telemetry is useful for knowing what features your customers use. While it’s great in theory to have product managers who dogfood the product and can act on everyone’s behalf, the reality is that telemetry is what ensures your favorite feature keeps being maintained. It helps ensure the bugs you see get triaged and root-caused.
Unfortunately, “telemetry” has grown to mean too many things to different people. It can refer to feature-usage metrics, bug and crash reporting, advertising, or behavior tracking.
Is there evidence that Firefox still reports telemetry even when you disable it? That seems like a strong claim to make about Firefox.
Accidentally typo your password and you get blocked. And if you’re tunneling over Tor, the ban lands on 127.0.0.1, since every connection through the hidden service arrives from localhost, which means now nobody can log in.
Fears raised over ‘Chinese spy cranes’ in US ports
There are concerns that the machines are effectively Trojan Horses for Beijing and could be used to sabotage sensitive logistics
Unexplained communications equipment has been found in Chinese-made cranes in US ports that could be used for spying and potentially “devastate” the American economy, according to a new congressional investigation.
The finding, first reported by The Wall Street Journal (WSJ), will stoke American concerns that the cranes are effectively Trojan Horses for Beijing to gain access to, or even sabotage, sensitive logistics.
The probe by the House Committee on Homeland Security and the House select committee on China found over a dozen pre-installed cellular modems, which can be remotely accessed, in just one port.
Many of the devices did not seem to have a clear function or were not documented in any contract between US ports and crane maker ZPMC, a Chinese state-owned company that accounts for nearly 80 per cent of ship-to-shore cranes in use in America, according to the WSJ.
The modems were found “on more than one occasion” on the ZPMC cranes, a congressional aide said.
“Our committees’ investigation found vulnerabilities in cranes at US ports that could allow the CCP [Chinese Communist Party] to not only undercut trade competitors through espionage, but disrupt supply chains and the movement of cargo, devastating our nation’s economy,” Mark Green, the Republican chair of the House Homeland Security Committee, told CNN.
The Chinese government is “looking for every opportunity to collect valuable intelligence and position themselves to exploit vulnerabilities by systematically burrowing into America’s critical infrastructure,” he told the WSJ, adding that the US had overlooked the threat for too long.
The Telegraph has contacted ZPMC for comment.
‘The new Huawei’
A spokesman for the Chinese embassy in Washington DC said claims that Chinese-made cranes pose a security risk are “entirely paranoia.”
The US investigation began last year amid Pentagon fears that sophisticated sensors on large ship-to-shore cranes could register and track containers, offering valuable information to Beijing about the movement of cargo supporting US military operations around the world.
At the time, Bill Evanina, a former top US counterintelligence official, said: “Cranes can be the new Huawei.”
“It’s the perfect combination of legitimate business that can also masquerade as clandestine intelligence collection,” he told the WSJ.
In recent years, a handful of Chinese crane companies have grown into major players in the global automated ports industry, working with Microsoft and other companies to connect equipment and analyse data in real-time.
Paperless does support defining a folder structure that it uses to organize documents within the Paperless media volume; however, you should treat that structure as read-only.
OP could use this as a way to keep their desired folder structure as much as possible, but it would have to be separate from the consumption folder.
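If I remember right, that structure is controlled by the PAPERLESS_FILENAME_FORMAT setting in paperless-ngx (double-check the docs for the exact placeholder names); something along these lines in the container environment:

```
PAPERLESS_FILENAME_FORMAT={created_year}/{correspondent}/{title}
```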
I don’t fully understand what you’re saying, but let’s break this down.
Since you say you get an NGINX page, what does your NGINX config look like? What exactly does the NGINX “login page” say? Is it an error or is it a directory listing or something else?
Then try something like this (just a generic reverse-proxy sketch; the server_name and the upstream port are placeholders for wherever your app actually listens):
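```nginx
server {
    listen 80;
    server_name app.example.com;          # placeholder: your actual hostname

    location / {
        proxy_pass http://127.0.0.1:8080; # placeholder: wherever your app listens
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

If you still land on the stock NGINX page after that, a default server block (e.g. in sites-enabled) may be catching the request before yours does.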
Create a quantity unit for ml and one for liter.
In your product use:
- Unit stock: bottle or liter
- Unit purchase: bottle
- Consume: ml
- Price unit: ml
Set a product-specific QU conversion from bottle to ml.
Weirdly, the quick consume unit is based on the stock unit, not the consume unit. That seems like a bug.
The problem with Grocy is that going too fine-grained means you’re unlikely to keep it up to date or accurate. I would not try to track your usage in ml; just track it at the bottle level.
However, you can still track the price per ml, because Grocy lets you set units independently. Just define a mapping between bottle and ml (e.g. 1 bottle = 750 ml) and the per-ml price follows from the bottle price.
If you’re running Docker for servers rather than for development, you can make Hyper-V work. I used to do that before I got a separate Linux server, and it worked out.
Just set up a network adapter that’s bridged to your Ethernet adapter (an external virtual switch, in Hyper-V terms), then create a VM that uses that bridged adapter. The Linux VM will appear as just another computer on your LAN, and you can use Docker with host networking.
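Roughly what that looks like in PowerShell on the Windows host; the switch name, adapter name, and VM name are placeholders for your own setup:

```powershell
# Create an external virtual switch bridged to the physical NIC,
# keeping the Windows host itself connected through it as well.
New-VMSwitch -Name "LanBridge" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Attach the Linux VM to that switch; it then gets its own address
# from your LAN's DHCP, just like a separate machine.
Connect-VMNetworkAdapter -VMName "docker-host" -SwitchName "LanBridge"
```

Inside that VM, containers started with `docker run --network host ...` bind straight to the VM’s LAN address.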
They’re forging the GPS to look like it’s in the EU. Do we turn off all the EU terminals too?
I’m a little surprised they can’t identify spoofing by comparing the incoming signal’s direction with the claimed location. They already have antennas that can be electronically steered using the phased array, so they should be able to estimate where a signal is actually coming from.