  • Dual-booting is possible and easy: you just have to shrink the Windows partition and install Linux next to it. Make sure not to format the whole disk by mistake, though. A lot of Linux installers want to format the disk by default, so you have to pick manual mode and make sure to shrink (not delete and re-create!) the Windows partition.
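
    For reference, here’s a minimal sketch of doing the shrink from the Windows side before running the installer (the volume number below is hypothetical; pick whichever volume actually holds C:):

    ```
    rem In an elevated Command Prompt on Windows:
    diskpart
    list volume              rem find the volume that holds C:
    select volume 2          rem hypothetical number, yours will differ
    shrink desired=51200     rem free up ~50 GB (value is in MB) for Linux
    ```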

    As for its usefulness, however… Switching the OS is incredibly annoying. Every time you want to do that, you have to shut down the system completely and boot it back up. That means you have to stop everything you’re doing, save all your progress, and then try to get back up to speed 2 minutes later. After a while the constant rebooting gets really old.

    Furthermore, Linux is a completely different system that shares only some surface-level things with Windows. Switching to it basically means re-learning how to use a computer almost from scratch, which is also incredibly frustrating.

    The two things combined very quickly turn into a temptation to just keep using the more familiar system. (Been there, done that.)

    I think I’ll have to agree with people who propose Virtual Machines as a solution.

    Running Linux in a VM on Windows would let you play around with it, tinker a little and see what software is and isn’t available on it. From there you’ll be able to decide if you’re even willing to dedicate more time and effort to learning it.

    If you decide to continue, you can dual-boot Windows and Linux: not so you can switch between the two, but so you can back out of the experiment.

    Instead, the roles of the OSes could be reversed: a second copy of Windows could be installed in a VM, which, in turn, would run on Linux.

    That way, you’d still have a way to run some more picky Windows software (that is, software that refuses to work in Wine) without actually booting into Windows.
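
    As a rough sketch, with QEMU/KVM that could look something like this (disk size, file names and the ISO path are all placeholders):

    ```
    # Create a virtual disk for the Windows guest
    qemu-img create -f qcow2 win10.qcow2 60G

    # Boot the Windows installer ISO with KVM acceleration
    qemu-system-x86_64 \
      -enable-kvm -cpu host -smp 4 -m 8G \
      -drive file=win10.qcow2,format=qcow2 \
      -cdrom Win10.iso -boot d
    ```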

    This approach would maximize exposure to Linux, while still letting you back out of the experiment at any moment.


  • S410@kbin.social to Linux@lemmy.ml, re: “I dislike wayland”

    Wayland has its fair share of problems that haven’t been solved yet, but most of those points are nonsense.

    If that person lived a little over a hundred years ago and wrote a rant about cars vs horses instead, it’d go something like this:

    Think twice before abandoning Horses. Cars break everything!
    Cars break if you stuff hay in the fuel tank!
    Cars are incompatible with horse shoes!
    You can’t shove your dick in a car’s mouth!

    The rant you’re linking makes about as much sense.






  • You’re linking a post… from 2010. AMD replaced radeon with their open-source driver (amdgpu) in 2015. That’s what pretty much any AMD GPU that came out in the last 10 years uses now.

    Furthermore, amdgpu is an in-tree driver, and AMD actively collaborates with the kernel maintainers and the developers of other graphics-related projects.

    As for Nvidia: their kernel modules are better than nothing, but they don’t contain a whole lot in terms of actual implementation. Before, we had a solid black box; now, with those modules, we know that the black box has around 900 holes and what goes in and out of them.

    Furthermore, if you look at the page you’ve linked, you’ll see that “the GitHub repository will function mostly as a snapshot of each driver release”. While the possibility of contributing is mentioned… Well, it’s Nvidia. It took them several years to finally give up on trying to force EGLStreams and implement GBM, which had already been adopted as the de facto standard by literally everybody else.

    The modules are not useless. Nvidia tends not to publish any documentation whatsoever, so they’re probably better than nothing and of some use to the nouveau driver developers… But it’s not like Nvidia came out and offered to work on nouveau to bring it up to par with their proprietary drivers.


  • “k, so for the least used hardware, linux works fine.”

    Yeah, basically. Which raises a question: how can companies with much smaller market share justify providing support, while Nvidia, a company that dominates the GPU market, can’t?

    “The popular distros are what counts.”

    Debian supports several DEs with only Gnome defaulting to Wayland. Everything else uses X11 by default.

    Some other popular distros that ship with Gnome or KDE still default to X11 too. Pop!_OS, for example. Zorin. SteamOS too, technically. EndeavourOS and Manjaro are similar to Debian, since they support several DEs.

    Either way, none of those are Wayland-exclusive, and switching to X11 takes exactly 2 clicks on the login screen. That isn’t necessary for anyone using AMD or Intel, and it wouldn’t be necessary for Nvidia users if Nvidia actually bothered to support their hardware properly. But I digress.
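
    (If you’re ever unsure which one a session is running, a quick check from a terminal on any systemd-based distro:)

    ```
    # Prints "wayland" or "x11" for the current session
    echo $XDG_SESSION_TYPE
    ```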

    “Worked well enough for me to run into the dozen of other issues that Linux has”

    Oh, it’s by no means perfect. Never claimed it was.

    “I like most people want a usable environment. Linux doesn’t provide that out of the box.”

    This depends both on the distro you use and on what you consider a “usable environment”.

    If you extensively use Office 365, OneDrive, need ActiveDirectory, have portable storage encrypted with BitLocker, etc., then, sure, you won’t have a good experience with any distro out there. And even if you don’t, grabbing a geek-oriented distro (e.g. Arch or Gentoo) or a barebones one (e.g. Debian) means you, again, won’t have the best experience.

    A lot of people, however, don’t really do a whole lot on their devices. The most widely used OS in the world, at this point in time, is Android, of all things.

    If all you need to do is use the web and, maybe, edit some documents or pictures now and then, Linux is perfectly capable of that.

    Real-life example: I’ve switched my parents onto Linux. They’re very much not computer savvy, and Gnome, with its minimalistic, mobile-device-like UI and very visual, app-store-like program manager, is significantly easier for them to grasp. The number of issues they ask me to deal with has dropped by… A lot. Actually, every single issue this year was the printer failing to connect to the Wi-Fi, so I don’t suppose that counts as a technical issue with the computer, does it?

    “wacom tablets”

    I use Gnome (Wayland) with an AMD GPU. My tablet is plug and play… Unlike on Windows. Go figure.




  • Trying to represent oneself in court is a pretty stupid thing to do, generally.

    I am not a lawyer, but I’m pretty sure you need to be able to defend yourself within the legal system while following all of its rules. You need to know the laws, their quirks, loopholes, etc. to construct your defense properly. Even if the case is complete nonsense, if you lack the knowledge to defend yourself, or the ability to use the knowledge you have coherently, you’ll lose.

    A neat paper filed in accordance with all the rules, a paper that quotes actual laws and precedents, will generally beat an oral argument backed by common sense. And that’s in general! Let alone when you’re going against Disney and their nigh-infinite army of lawyers.


  • Even with the character in the public domain, I doubt Disney would be particularly happy with anyone using it.

    They can send cease-and-desist letters left and right, claiming that “the use of the mouse is fine, but the elements X, Y and Z were introduced in a later work of ours that’s still protected”, even if that’s a plain lie.

    Trying to take Disney to court is suicide.

    They have enough money to hire half the lawyers in the world and make them come up with a lawsuit even if there’s no basis for one. They can stretch the lawsuit process out for years, and yet the fees would be but a fraction of a fraction of a percent of their yearly spending. Almost any defendant, meanwhile, would be financially ruined by it, even if they end up winning.




  • I use Arch + Gnome with VRR patches on my main PC.

    I find it actually easier to use than e.g. Fedora or Ubuntu, due to better documentation and way more packages available in the repos… With many, many more in the AUR!
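
    For anyone unfamiliar, installing something from the AUR by hand looks roughly like this (the package name is a placeholder):

    ```
    # Clone the AUR package's build recipe, then build and install it
    git clone https://aur.archlinux.org/some-package.git
    cd some-package
    makepkg -si   # -s pulls in build dependencies, -i installs via pacman
    ```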

    By installing all the stuff commonly found on other distros (and which many consider bloat), you’ll get basically the same thing as, well, any other distro. I have all the “bloat” like NetworkManager, Gnome, etc., which is known to work together very well and which tries to be smart and auto-configure a lot of stuff. Bloat it may be, but I am lazy~
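
    On Arch, that amounts to something like this (the package set is just an example of the “bloat” in question):

    ```
    # Pull in NetworkManager and the full GNOME desktop from the repos
    sudo pacman -S --needed networkmanager gnome

    # Enable the services so networking and the login screen start on boot
    sudo systemctl enable NetworkManager.service gdm.service
    ```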

    Personally, I think it’s better to stick to upstream distros whenever possible. For example, Nobara, which is being recommended in this thread quite a lot, is maintained by a single person. In reality, it’s not much more than regular Fedora with a couple of tweaks and optimizations. The vast majority of those you could apply yourself on the upstream distro and avoid being dependent on that one person. It is a single point of failure, after all.




  • “AI” models are, essentially, solvers for mathematical systems that we, humans, cannot describe and create solvers for ourselves.

    For example, a calculator for pure numbers is a pretty simple device, all the logic of which can be designed by a human directly. A language, though? Or an image classifier? Those are not possible to create by hand.

    With “AI”, instead of designing all the logic manually, we create a system that can end up in any of a finite, yet still near-infinite, number of states, each of which defines different behavior. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for some incredibly complex system.

    If we were to try to make a regular calculator that way, and all we gave the model was “2+2=4”, it would memorize the equation without understanding it. That’s called “overfitting”, and it’s something people building AI try their best to prevent. It happens when the training data contains too many repeats of the same thing.

    However, if there is no repetition in the training set, the model is forced to actually learn the patterns in the data, instead of the data itself.

    Essentially: if you’re training a model on a single copyrighted work, you’re making a copy of that work via overfitting. If you’re using terabytes of diverse data, overfitting is minimized. Instead, the resulting model ends up with an actual understanding of the system you’re training it on.
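
    A toy illustration of that in Python (the “system”, the model and the numbers are entirely made up, just to show memorization vs. generalization):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # The "system" we want a solver for: a +2 calculator, y = x + 2
    def system(x):
        return x + 2 + rng.normal(0, 0.05, size=x.shape)

    # Overfitting: a single training example ("2 + 2 = 4") repeated 100 times
    x_rep = np.full(100, 2.0)
    y_rep = system(x_rep)

    # Diverse data: 100 different inputs force the model to learn the pattern
    x_div = rng.uniform(-10, 10, 100)
    y_div = system(x_div)

    # Fit the same high-capacity model (a degree-9 polynomial) to both sets;
    # numpy may warn that the first fit is rank-deficient -- that degeneracy
    # is exactly the point
    memorizer = np.polyfit(x_rep, y_rep, 9)
    generalizer = np.polyfit(x_div, y_div, 9)

    print(np.polyval(memorizer, 5.0))    # wildly off: it only "knows" x = 2
    print(np.polyval(generalizer, 5.0))  # close to 5 + 2 = 7
    ```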



  • OpenSUSE + KDE is a really solid choice, I’d say.

    The most important Linux advice I have is this: Linux isn’t Windows. Don’t expect things to work the same.
    Don’t try too hard to re-configure things to match the way they are on Windows. If there isn’t an easy way to get a certain behavior, there’s probably a reason for it.