• 7 Posts
  • 58 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Endless Sky. The save game is a text file. Save a game on the mobile app (F-Droid) and on the PC (Flatpak), then note the last line of each file; that is the line you must swap to transfer a save between them. It is the first game I have practically played on both. The game mechanics differ between the two, and you need to alter your strategy accordingly. On mobile, I travel with a ship set up for boarding pirate vessels and never target enemies directly; all of my guns are automatic turrets. I just use a fast ship and travel with a large group of fighters. It is more of a grind on mobile, but it can be used to build up resources and reserves. The game is much bigger than it first appears. You need to either check out a guide or explore very deep into the obscure pockets of the map.
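    The last-line swap described above can be sketched in a few lines, assuming the saves really are plain text; the file paths in the example are placeholders, not the actual locations either platform uses.

```python
from pathlib import Path

def swap_last_line(src: str, dst: str) -> None:
    """Copy the last line of the `src` save over the last line of `dst`."""
    src_lines = Path(src).read_text().splitlines()
    dst_lines = Path(dst).read_text().splitlines()
    dst_lines[-1] = src_lines[-1]  # the swapped line carries the save over
    Path(dst).write_text("\n".join(dst_lines) + "\n")

# e.g. swap_last_line("pc_save.txt", "mobile_save.txt")
```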


  • I won’t touch the proprietary junk. Big tech “free” usually means street-corner data whore. I have a dozen FOSS models running offline on my computer, though. I also have text-to-image and text-to-speech, am working on speech-to-text, and will probably build my Iron Man suit after that.

    These things can’t be trusted, though. An LLM is just a next-word statistical prediction system combined with a categorization system. There are ways to make an LLM trustworthy, but they involve offline databases and prompting for direct citations; these are different from chat-style prompt structures.
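    A minimal sketch of what that citation-grounded prompting looks like: pull passages from a small offline store and build a prompt that forces the model to answer only from numbered sources. The store and the keyword-overlap scoring here are toy stand-ins, not any particular library’s API.

```python
def retrieve(store: list[str], query: str, k: int = 2) -> list[str]:
    # crude keyword-overlap ranking as a placeholder for a real index
    terms = set(query.lower().split())
    scored = sorted(store, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(store: list[str], query: str) -> str:
    sources = retrieve(store, query)
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer using only the sources below and cite them like [1].\n"
        f"{numbered}\nQuestion: {query}\nAnswer:"
    )
```

    The point is that the model’s answer can be checked against the numbered sources, which is what makes it auditable in a way a bare chat prompt is not.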



  • I loved Dread and Prime 2. I tried playing Super Metroid on the Switch, but the controls are just too poor to pull off the advanced combination moves with the slow, low-quality emulation. I’m disappointed that there are not a dozen Metroid titles on the Switch. Everything in the Prime series should be ported.

    I’m mostly referring to the long hiatus(es) before Dread, and all of the nonsense from developers other than Retro Studios. I understand they were probably in a funky position when it came to writing and coding for a new 3D engine after the Prime series had played out the life of the prior one. IMO, the entire SDK for Nintendo hardware should account for key franchise titles like Metroid. These games should have storyboards and plans from first light of new hardware, and the plans should always include classic titles too. My biggest complaint about Nintendo is the low quality of most titles on the platform. They are too focused on recruiting developers instead of on quality games. Sure, there are some great games like BotW, TotK, and Dread, but I’m not going to sift through all the junk in their store to try to find anything else worth playing. I bought a couple of titles that a lot of people recommended and hated them, with no recourse, and they cost as much as good games. I would have paid for and played the entire Prime series if it had been ported, but Nintendo totally fails at maintaining their legacy titles. It is this lack of availability now, plus the stupid fumble of letting extra developers with their own forked vision into the franchise, that I am calling a fumbled opportunity.


  • Yeah, but MG is WAY older: 1987, vs. GoW in 2005 and ES in 1994.

    Metal Gear Solid was one of the best games on the original PlayStation. I haven’t been into consoles since the PS2. Metal Gear Solid was so good compared to anything else at the time that the idea it is only at 60M now seems like a major fumble and a lack of management. I guess it is like Metroid: underdeveloped, or handed to idiots “with a new vision.”



  • Have you seen The Great Gatsby come up with Wizard too? That’s what always appears when mine goes too far. I’m working on compiling llama.cpp from source today. I think that’s all I need to be able to use some of the other models like Llama2-70B derivatives.

    The code I was reading for llama.cpp is only an 850-line Python file (not exactly sure how Python = CPP yet, but YOLO I guess; I just started reading the code from a phone last night). This file is where all of the prompt magic happens. I think all of the easy checkpoint model stuff that works in Oobabooga uses llama-cpp-python from pip. That hasn’t had any GitHub repo updates in 3 months, so it doesn’t work with a lot of the newer and larger models. I’m not super proficient with Python. It is one of the things I had hoped to use AI to help me learn better, but I can read and usually modify someone else’s code to some extent. It looks like a lot of the functionality (likely) built into the more complex chat systems like TavernAI is just mixing the chat, notebook, and instruct prompt techniques into one “context injection” (if that term makes any sense).
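    A hypothetical sketch of that “context injection” idea: fold instruct-style rules, notebook-style persistent notes, and the chat transcript into one string before it reaches the model. The `### Instruction:` / `### Notes:` headers are illustrative; real front ends each use their own templates.

```python
def inject_context(rules: str, notes: str,
                   history: list[tuple[str, str]], user_msg: str) -> str:
    # flatten the chat history into a plain transcript
    transcript = "\n".join(f"{who}: {msg}" for who, msg in history)
    # instruct block + notebook block + chat block = one context string
    return (
        f"### Instruction:\n{rules}\n\n"
        f"### Notes:\n{notes}\n\n"
        f"{transcript}\nUser: {user_msg}\nAssistant:"
    )
```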

    The most information I have seen someone work with independently offline was using LangChain with a 300-page book, so I know at least that much is possible. I have also come across a few examples of people using LangChain with up to 3 PDF files at the same time. There is also the MPT model with up to 32k context tokens, but it looks like it needs server-grade RAM in the hundreds of GB to function.
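    The usual workaround for small context windows, and roughly what those tools do under the hood, is splitting the long text into overlapping chunks so each piece fits in the model’s context. A sketch, with arbitrary example sizes:

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split `text` into chunks of `size` chars, each overlapping the last."""
    chunks = []
    step = size - overlap  # advance less than `size` so chunks overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

    Each chunk can then be embedded or summarized separately, which is how a 300-page book gets squeezed through a 2k-token window.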

    I’m having trouble with distrobox/conda/Nvidia on Fedora Workstation. I think I may start over with Nix soon, or I am going to need to look into Proxmox and virtualization, or go back to an immutable base to ensure I can fall back effectively. I simply can’t track down where some dependencies are getting stashed, and I only have 6 distrobox containers so far. I’m only barely knowledgeable enough in Linux to manage something like this well enough for it to function. Suggestions welcome.



  • WizardLM 30B at 4 bits in the GGML version on Oobabooga runs almost as fast as Llama2 7B on just the GPU. I set it up with 10 threads on the CPU and ~20 layers offloaded to the GPU. That leaves plenty of room for a 4096 context with a batch size of 2048. I can even run a 2GB Stable Diffusion model at the same time within my 3080’s 16GB of VRAM.
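    Picking a layer count like that ~20 can be reduced to back-of-the-envelope arithmetic: divide the quantized model size by its layer count and see how many layers fit in VRAM after reserving room for context and scratch buffers. This is only a heuristic of mine, not anything llama.cpp computes, and the reserve figure is a guess.

```python
def layers_that_fit(vram_gb: float, model_gb: float,
                    n_layers: int, reserve_gb: float = 2.0) -> int:
    """Rough estimate of how many layers fit on the GPU; the rest stay on CPU."""
    per_layer_gb = model_gb / n_layers          # assume layers are equal-sized
    usable_gb = vram_gb - reserve_gb            # leave room for KV cache etc.
    return min(n_layers, int(usable_gb / per_layer_gb))
```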

    Have you tried any of the larger models? I just ordered 64GB of RAM. I also got Kobold mostly working, and I hope to use it to try Falcon 40B. I really want to try a 70B model at 2-4 bit and see what its accuracy is like.



  • Cookies are not needed; they shift the security burden onto the user. Secure the information on the server, just like any other business. Offloading onto the client is wrong: it leads to ambiguity and abuses. Visiting a physical store and visiting a business on the internet are no different. My presence grants no right to my person, to searches, or to tracking inside the location or outside of it. Intentions are worthless; the only thing that matters is what is possible and practiced. Every loophole is exploited and should be mitigated. The data storage and coding practices must change.


  • Nah, it should be the default state of affairs. Data mining is stalking and theft. It centers around very poor logic and decisions.

    Things like browser cookies are criminal garbage. Storing anything on a user’s computer is stalking. Draw the parallel here: to shop in any local store, you must first reveal everything you are wearing and carrying in every possible detail, where you came from before you visited this store, and where you are going next. They also want to know everything you looked at, and how you react to changes in the items presented to you and to changes in prices. They want enough information to connect you across stores based on your mode of transportation, and enough data to connect your habits over the last two decades.

    Your digital existence should not be subject to slavery either. Ownership over ourselves is a vital aspect of freedom. Privacy is about ownership and dominion. If you dislike all the digital rights management and subscription-service nonsense, these exist now as a direct result of people neglecting ownership. In the big picture, this path leads all of humanity back into another age of feudalism. The only difference between a serf and a citizen is ownership of property and tools. Everything happening right now is a battle over a new age of slavery. “You will own nothing and you will be happy about it.” Eventually this turns into “Your grandchildren will own nothing and say nothing, or they will be dead about it.” What you do about your privacy now will be a very big deal from the perspective of future generations.


  • Hey there Lionir. Thanks for the post. Can the Beehaw team please look into copying this bot, or getting its creator to run it here? https://lemmy.world/u/[email protected]

    I think the person that created that bot is somehow connected to the piped.video project. I know the whole privacy consciousness thing isn’t for everyone, but this bot’s posts are quite popular elsewhere on Lemmy.

    FYI, the main reason to use piped.video links is that it is set up as an alternative front end for YT that automatically routes all users through a bunch of VPNs to help mitigate Alphabet’s privacy abuses and manipulation.



  • Not exactly. Stupid people with advanced tools make stupid outputs. Venture capital is pushing the propaganda sauce hard and a lot of stupid people are jumping on AI as a corporate trend. These are the idiots.

    The tools are next-level. We are on the edge of this tech becoming a really big deal. Research papers are regularly making breakthroughs, with double-digit percentage improvements in efficiency and accuracy. The reason it is a big deal is that you can have around 1/4 of the knowledge of the entire internet running on hardware as powerful as a current flagship phone. Sure, it lies around 1/2 the time, but these are problems that are being solved. The latest and greatest models are ancient history in a matter of 2-3 weeks. Honestly, have a casual conversation with an offline and uncensored LLM. You may know it is lying from time to time, but if you’re being objective, so are most humans you encounter under casual circumstances. The sociological function and potential value of this tech is pretty powerful medicine. If you need someone to talk to, or to talk out an issue in private, this is a way to make that happen.



  • I was a buyer for a chain of high-end bike shops for many years. Amazon really only sells junk products. Real quality brands of niche products can’t support both Amazon and the typical brick-and-mortar business inventory structure. I spent between $100k and $500k in preseason bike-brand commitments for 3 stores. If any of those brands had decided to allow sales on Amazon, I would have dropped them immediately. Multiply this by every bike shop that exists; this is more than Amazon could compete with by a long shot. The issue is that every buyer in a shop knows what they are able to sell effectively and buys accordingly. I tailored my orders for every shop independently. It would be impossible for Amazon to predict and fund high-end bikes at this scale.

    “So what,” you say, “it’s just bikes.” No, it is not. The bike brands are usually part of a group of brands that includes several parts, clothing, and accessory lines. These are part of the preseason commitments with the bike brands too, so none of these are sold on Amazon either. This is the case with most things: the best, or even decent, stuff is not sold on Amazon.

    The worst thing about Amazon is that they aggregate all identical products in their warehouses. This makes it trivial for a seller to insert fake goods into a product pool, and it is completely untraceable back to them.