• 0 Posts
  • 49 Comments
Joined 1 year ago
Cake day: August 2nd, 2023

  • An alternative definition: a real-time system is a system where the correctness of the computation depends on a deadline. For example, if I have a drone checking “with my current location + velocity will I crash into the wall in 5 seconds?”, the answer will be worthless if the system responds 10 seconds later.

    A real-time kernel is an operating system that makes it easier to build such systems. The main difference is that it offers lower latency than a general-purpose OS for your one critical program: the OS will give that program as much priority as it wants (to the detriment of everything else) and handle all signals immediately (instead of coalescing/combining them to reduce overhead).

    Linux has real-time priority scheduling as an optional feature, since lowering latency does not always mean reduced overhead or higher throughput. This lets system builders design RT systems (such as audio processing systems, robots, or drones) that use these features without annoying the hell out of everyone else.
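
    For the curious, here's a minimal sketch of how a program opts into that scheduling class via the POSIX sched_setscheduler() call (the priority value and error handling are illustrative; on Linux this needs root or CAP_SYS_NICE):

    ```c
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        /* SCHED_FIFO tasks run ahead of all normal tasks;
         * valid priorities on Linux are 1 (lowest) to 99 (highest). */
        struct sched_param param = { .sched_priority = 80 };

        if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &param) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... time-critical loop here, e.g. the drone's collision check ... */
        return 0;
    }
    ```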



  • Pretty sure expiry is handled by the local crowdsec daemon, so it should automatically revoke rules once a set time is reached.

    At least that’s the case with the iptables and nginx bouncers (a 4-hour ban for probing). I would assume it’s the same for the Cloudflare one.
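
    For reference, the ban type and duration live in crowdsec’s profiles.yaml; from memory, the stock profile looks roughly like this (double-check against your install):

    ```yaml
    name: default_ip_remediation
    filters:
      - Alert.Remediation == true && Alert.GetScope() == "Ip"
    decisions:
      - type: ban
        duration: 4h
    on_success: break
    ```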

    Alternatively, maybe look into running two bouncers (1 local, 1 CF)? The CF one filters out most bot traffic, and if some still get through then you block them locally?


    I’ve recently moved from fail2ban to crowdsec. It’s nice and modular and seems to fit your use case: set up an HTTP 404/rate-limit filter and a Cloudflare bouncer to ban the IP address at the Cloudflare level (instead of in iptables). Though I’m not sure if the Cloudflare tunnel would complicate things.

    Another good thing about it is its crowd-sourced IP reputation list. Too many blocks from other users = preemptive ban.


  • According to this post, the person involved exposed a different name at one point.

    https://boehs.org/node/everything-i-know-about-the-xz-backdoor

    Cheong is not a Pinyin name; it comes from a different romanization system. Assuming this isn’t a false trail (unlikely: why would you expose a fake name once instead of using it all the time?), that rules out mainland China and Singapore, which use Pinyin. Or somebody has a time machine and grabbed this guy before 1956.

    Likely sources of the name would be a country or Chinese administrative zone that uses Chinese with a non-Pinyin romanization. That gives us Taiwan, Macau, or Hong Kong, all of which are in GMT+8. Note that two of these are technically under PRC control.

    Realistically, I feel this is just a rogue attacker rather than a nation state. The probability of China (1) hiring someone from these specific regions and (2) deliberately exposing a non-Pinyin full name exactly once is extremely low. Why bother with this when you have plenty of graduates from Tsinghua in Beijing, especially with so many people desperate for jobs after COVID?





  • StarDreamer@lemmy.blahaj.zone to Linux@lemmy.ml: Help w/ crash (edited, 10 months ago)

    Look at the line with asm_exc_invalid_op. That seems like a hardware fault caused by an invalid instruction to me: either something wrong is being interpreted as an opcode (unlikely), or the driver was compiled with instruction-set extensions not available on the current machine.

    OP, how old is your CPU? And how old is the NIC you are using?

    Edit: did you use a custom driver for the NIC? I was looking at the Linux source and rt_mutex_schedule did not seem to exist. Never mind, I was checking 4.18 instead of 6.7; found it now. The bug is most likely inside a macro called preempt_disable(). Unfortunately, most of the functions involved are heavily inlined and architecture-dependent, so you won’t get much out of it. But it is likely that any changes you made in terms of preemption might also be causing the bug.
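
    For context, here’s roughly what that macro expands to on a CONFIG_PREEMPT_COUNT build (a simplified paraphrase of include/linux/preempt.h, not a verbatim copy; details vary by config and architecture):

    ```c
    /* Disabling preemption just bumps a per-CPU counter and adds a
     * compiler barrier; the real work hides in preempt_count_inc(),
     * which is inlined and architecture-specific. */
    #define preempt_disable() \
    do { \
            preempt_count_inc(); \
            barrier(); \
    } while (0)
    ```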









  • Some people play games to turn their brains off. Other people play them to solve a different type of problem than they do at work. I personally love optimizing, automating, and min-maxing numbers while doing the least amount of work possible. It’s relatively low-complexity (compared to the bs I put up with daily), low-stakes, and much easier to show someone else.

    Also shout-out to CDDA and FFT for having some of the worst learning curves out there along with DF. Paradox games get an honorable mention for their wiki.


    The argument is that processing data physically “near” where it is stored (known as NDP, near-data processing, as opposed to traditional architecture designs, where data is stored off-chip) is more power-efficient and has lower latency, for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

    Personally, I’d say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS-master’s-level understanding of the architecture to write a program in the P4 language (which doesn’t allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it’s worthless if most programmers on the job market can’t work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it’s just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

    I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) as dedicated accelerators. Existing application code probably will not be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your computer supports it; there’s a sketch of the idea below. This way, your standard 9-to-5 programmer can keep working like they used to and leave the fancy performance-optimization stuff to a few experts.
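
    To make that concrete, here’s a hedged sketch of such a library-level dispatch. The dma_engine_available()/dma_engine_copy() names are made up for illustration (stubbed so the example compiles), not a real API:

    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical probe for an on-DIMM data mover; a real library
     * would detect actual hardware. Stubbed to "not present" here. */
    static bool dma_engine_available(void) { return false; }

    /* Hypothetical accelerator copy; returns 0 on success. */
    static int dma_engine_copy(void *dst, const void *src, size_t n) {
        (void)dst; (void)src; (void)n;
        return -1; /* no hardware in this sketch */
    }

    /* Application-facing call: same shape as memcpy(), but large
     * copies are offloaded to the data mover when one is present. */
    static void *smart_memcpy(void *dst, const void *src, size_t n) {
        if (n >= (1u << 20) && dma_engine_available() &&
            dma_engine_copy(dst, src, n) == 0)
            return dst;             /* the accelerator did the work */
        return memcpy(dst, src, n); /* fall back to the CPU path */
    }

    int main(void) {
        char src[] = "hello", dst[sizeof src];
        smart_memcpy(dst, src, sizeof src);
        puts(dst); /* prints "hello" via the CPU fallback */
        return 0;
    }
    ```

    The point is that the dispatch happens inside the library: the caller keeps the memcpy()-style interface and never needs to know whether an accelerator exists.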