• 0 Posts
  • 105 Comments
Joined 2 years ago
Cake day: November 5th, 2023

  • greyfox to Leopards Ate My Face · “Trump supporter is done” · 14 hours ago

    A lot of the QAnon stuff was about draining the swamp by arresting all of the pedo Dems.

    They were basically told everything they see in the media is a cover for the real plan to catch these people and that there are already all sorts of sealed indictments that they are just waiting to execute at once, etc.

    So some of his most rabid supporters are realizing something is up now that he is denying this even exists.


  • greyfox to 196@lemmy.blahaj.zone · “rule” · 7 days ago

    It isn’t so much about multiple computers; it is perfectly valid to run a “cluster” on a single computer (not really a cluster anymore?).

    I would say high availability/scaling/etc. aren’t the main point; those are just features of the real main point: abstracting hardware/infrastructure from software deployment (mostly limited to containers).

    Containers brought us the first step of the way by removing the link between your software and the OS it runs on, but there is still a lot of work to get the infrastructure around that software up and running.

    Do you need multiple servers? Do you need a load balancer? Do you need health checks? Does the software need a persistent volume attached to it? Does it need environment variables set, or a config file mounted? Are some of those secrets that should be grabbed from a secrets management system? Do you need a DNS name for other services to access yours? Did a server die, so the software should be moved? And then there are network rules, reverse proxies, restart policies, and so on.

    Kubernetes gives you a set of APIs to control/monitor/maintain/deploy all of the infrastructure around your container as well as the container itself. You can take the same deployment and deploy it to the server in your basement or a cloud provider.

    You as the admin may need to set up those clusters differently, because the persistent storage driver for Amazon EKS isn’t going to be the same as the NAS in your basement. But once you have all of those components set up for each environment, a properly written Kubernetes deployment should just work on both clusters without the developer having to make changes to accommodate different infrastructure.

    Kubernetes is just the cloud (as in API-driven infrastructure) for containers, but more importantly it is a standard. Unlike normal clouds like AWS/Azure, which are proprietary, anyone can implement the Kubernetes APIs. That means you can easily move between cloud vendors and avoid lock-in, all while getting the cloud benefits of automating infrastructure.
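
    To make that concrete, here is a rough sketch of what a minimal deployment looks like (the app name, image, and environment variable are placeholders, not anything from a real setup):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      replicas: 2                    # how many copies to run; the scheduler picks the nodes
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
          - name: example-app
            image: nginx:1.27        # placeholder container image
            ports:
            - containerPort: 80
            env:
            - name: EXAMPLE_SETTING  # hypothetical environment variable
              value: "enabled"
    EOF

    The same manifest applies unchanged to the cluster in your basement or to EKS; only the cluster-level plumbing (storage classes, load balancer integration, etc.) differs.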


  • Yeah it is fairly trivial to check. I called it an SPF record but technically in DNS it is a TXT record. TXT records are just a generic record type used for many different purposes.

    Here are a few common DNS commands to look up TXT records:

    host -t TXT domainname

    nslookup -type=TXT domainname

    dig -t TXT domainname

    For your barracudanetworks example we get a few TXT records back, but we can see spf.protection.outlook.com is in their list and is therefore allowed to send on behalf of the barracudanetworks.com domain. All of the other entries are allowed to send on their behalf too, so your email isn’t guaranteed to go through Microsoft.
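
    If you just want the SPF entry, something like this (the exact output will vary) pulls the TXT records and filters for it:

    dig +short TXT barracudanetworks.com | grep spf1

    Any mechanism listed after v=spf1 (ip4: ranges, include:spf.protection.outlook.com, and so on) is a sender that the domain has authorized.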

    Judging by the Salesforce/Zendesk stuff they probably have ticketing/customer management systems, which means it might be possible to contact them without going through Microsoft’s email servers. Notifications from those systems would probably be sending email directly to you instead of routing it through Office365.



  • You should check for SPF records as well. If they have SPF records (and Microsoft walks them through setting up those records), those records would need to list every mail server sending on their behalf.

    So it appears that in your case their MX records point at their own MTA, which then routes at least some of that email to Microsoft. If they are using SPF records to prevent others from spoofing their email addresses, and they are allowing Microsoft to send on their behalf, there would have to be SPF entries with Microsoft’s domains in them.

    Still no sure thing, but it’s a little more checking that you can do.
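
    As a rough sketch (replace theirdomain.com with the actual domain):

    dig +short MX theirdomain.com      # where their inbound mail is delivered
    dig +short TXT theirdomain.com     # look for a v=spf1 record

    If the SPF record contains something like include:spf.protection.outlook.com, they have authorized Microsoft to send mail for them.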


  • Your $1 has absolutely changed in value by 10pm. What do you think inflation is? It might not be enough change for the store to bother changing prices but the value changes constantly.

    Watch the foreign exchange markets, your $1 is changing in value compared to every other currency constantly.

    The only difference between fiat and crypto is that changing the prices in the store is difficult, and the volume of trade is high enough to reduce volatility in the value of your $. There are plenty of cases of hyperinflation in history where stores have to change prices on a daily basis, meaning that fiat is not immune to volatility.

    To prevent that volatility we just have things like the Federal Reserve, debt limits, federal regulations, etc. that are designed to keep you, the investor (money holder), happy with keeping that money in dollars instead of assets. The value is somewhat stable as long as the government is solvent.

    Crypto doesn’t have those external controls; instead it has internal controls, i.e. mining difficulty, which from a user perspective is better because it can’t be printed at will by the government.

    Long story short, fiat is no different than crypto: there is no real tangible value, so the value is whatever people think it is. Unfortunately crypto’s value is driven more by speculative “investors” than by actual trade demand, which means it is more volatile. If enough of the world changed to crypto it would be just as stable as your $.

    Not saying crypto is a good thing, just saying that it isn’t any better or worse. It needs daily usage for real trade by a large portion of the population to reduce the volatility, instead of just being used to gamble against the dollar.

    Our governments would likely never let that happen though; they can’t give up their ability to print money. It’s far easier to keep getting elected when you print the cash to operate the government than it is to raise taxes to pay for the things they need.

    The absolutely worthless meme coin scams/forks/etc. are just scammers and gamblers trying to rip each other off. They make any sort of useful critical mass of trade less and less plausible because they give all crypto a bad name. Not that Bitcoin/Ethereum started out any different, but now that enough people are using them, splitting your user base is just self-defeating.


  • When you saw that 20V on the board I assume that was right next to the charge port? There are often fuses very close to that connector that you can check for continuity on. They are usually marked with zeros because they act like a zero-ohm resistor.

    Even if the fuse is blown that might just be a sign that something further down the line failed but it would be an easy thing to check at least.


  • I am not exactly an expert at this but it could just be from heat. Do you have a multimeter to check if current can pass through it still?

    Either way it seems like this shouldn’t be affecting the laptop when plugged in because it is so close to the battery connector and it looks like the traces are related to the battery connector.

    Do you get anything at all (battery/power LEDs) trying to run off of the battery? Is it possible that the charge port failed and the battery is just dead now? Maybe check the battery voltage to see how far drained it is.



  • Nope, the Switch only keeps saves on the internal storage, or synced to their cloud if you pay for it. When doing transfers between devices like this there is no copy option, only a move and delete.

    There are some legitimate reasons they want to prevent this like preventing users from duplicating items in multiplayer games, etc. Even if you got access to the files they are encrypted so that only your user can use them.

    I think the bigger reason they do this is that there are occasionally exploits that are delivered through corrupted saves. So preventing the user from importing their own saves helps protect the Switch from getting soft-modded.

    If you mod your Switch you can get access to the save files, and since it has full access it can also decrypt them so that you can back them up. One of several legitimate reasons to mod your Switch.


  • I just did a playthrough recently and I think it holds up pretty well. There is a lot of wasted time on little cutscenes, like opening Atla/boxes and switching characters, which gets quite annoying, but gameplay was fine.

    There are one or two bosses that are difficult, but with a little leveling up, or wiki hints on how to cheese them, they are a piece of cake. Once you hit the ship dungeon and have easier access to backrooms (since you can buy the fish to enter them) you can grind for gemstones, and you end up being able to one-hit almost everything from there on out.

    Grinding gets a bit boring after a while; I’ll admit I enabled some fish point cheats in my emulator after I had one character with a maxed-out weapon. It was clear that I could easily do it myself, but I wasn’t going to waste that time to upgrade the other weapons I wanted leveled up.


  • greyfox to Self-hosting@slrpnk.net · “Self Hosted Private Forums?” · 1 month ago

    Probably a terrible idea but have you considered a private Lemmy instance? At the end of the day Lemmy/PieFed/Reddit are just forums with conversation threads and upvotes.

    Lemmy is probably way more of a resource hog than the various PHP options, but from a usability standpoint, if you have a favorite Lemmy mobile app it would work for your private instance as well.

    There appears to be a private instance mode that disables federation.


  • greyfox to Cyberstuck@lemmy.ca · “What the frunk” · 1 month ago

    Well if you rephrase it for a normal car it doesn’t sound so absurd. “If your hood won’t latch the car won’t let you drive at highway speeds?”

    A failed latch on a front compartment can be very dangerous because it catches the wind if it opens suddenly at 60+ mph. At best you are blinded; at worst it gets torn off and goes flying into a car behind you.

    As such, highway speeds should be restricted if the latch is malfunctioning. The real problem here is that Tesla doesn’t like dealers because they want that middleman money for themselves, so you often have to drive quite a distance to get it repaired. If this were a vehicle from any of the other major manufacturers, most people would probably be only a few miles from their nearest dealer.

    Normal cars have two hood latches: the primary latch (that you open with the hood release in the car) and a secondary safety latch (that you release when you reach under the hood to open it fully), so this is an extremely uncommon problem for a normal car.

    But since this is a frunk it gets opened a lot more for storage and users would probably not be very happy about having to deal with the secondary latch on a regular basis. So they have motorized those latches for ease of use, and motorizing them adds a lot more points of failure.



  • Since the ER-X is Linux under the hood the easiest thing to do would be to just ssh in and run tcpdump.

    Since you suspect this is from the UDR itself you should be able to filter for the IP of the UDR’s management interface. That should get you destination IPs, which will hopefully help track it down.
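
    Something along these lines should do it (the interface name and the IPs here are placeholders for whatever yours actually are):

    ssh ubnt@192.0.2.1                             # the ER-X
    sudo tcpdump -n -i switch0 host 192.0.2.10     # 192.0.2.10 standing in for the UDR's management IP

    The -n flag keeps the output as raw IPs so you can see exactly which destinations the UDR is talking to.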

    Not sure what would cause that sort of traffic, but I know there used to be a WAN speed test on the Unifi main page which could chew up a good amount of traffic. Wouldn’t think it would be constant though.

    Do you have other Unifi devices that might have been adopted with layer 3 adoption? Depending on how you set up layer 3 adoption, even if devices are local to your network they might be using hairpin NAT on the ER-X, which could look like internet activity destined for the UDR even though it is all local.


  • Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

    Say you deployed two different docker compose apps each with their own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).

    This also facilitates easier cleanup. The app’s documentation can say “docker compose down -v” and you are done, instead of listing a bunch of directories that need to be cleaned up.

    Those lingering directories can also cause problems for users that might have wanted a clean start when their app is broken, but with a bind mount that broken database schema won’t have been deleted for them when they start up the services again.
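
    For example (the image name and paths here are just illustrative), the difference in a compose file is only a couple of lines:

    services:
      db:
        image: mariadb:11
        volumes:
          # named volume, managed by Docker under /var/lib/docker/volumes
          - dbdata:/var/lib/mysql
          # bind mount alternative: a host path you choose
          # - /srv/myapp/db:/var/lib/mysql
    volumes:
      dbdata:

    “docker compose down -v” will remove the named volume, but it never touches the contents of a bind mount path.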

    All that said, I very much agree that when you go to deploy a docker service you should consider changing the named volumes to standard bind mounts for a couple of reasons.

    • When running production applications I don’t want the volumes to be able to be cleaned up so easily. A little extra protection from accidental deletion is handy.

    • The default location for named volumes doesn’t work well with any advanced partitioning strategies. i.e. if you want your database volume on a different partition than your static web content.

    • This is an old reason, and maybe more of a personal preference at this point, but back before the Docker overlay2 storage driver had matured we used the btrfs driver instead. Occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem, so I just personally want to keep anything persistent out of that directory.

    So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.

    Systems administrators running those applications should know and understand the docker compose file well enough to change those settings to make them production ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.


  • For shared lines like cable and wireless it is often asymmetrical so that everyone gets better speeds, not so they can hold you back.

    For wireless service providers, for instance, let’s say you have 20 customers on a single access point. Like a walkie-talkie, you can’t both transmit and receive at the same time, and no two customers can be transmitting at the same time either.

    So to get around this problem TDMA (time division multiple access) is used. Basically time is split into slices and each user is given a certain percentage of those slices.

    Since the AP is transmitting to everyone it usually gets the bulk of the slices like 60+%. This is the shared download speed for everyone in the network.

    Most users don’t really upload much, so giving the user radios slices equal to the AP’s would be a massive waste of air time. And since there are 20 customers on this theoretical AP, every 1mbit cut off of each user’s upload speed is 20mbit added to the total download capability for anyone downloading on that AP.

    So let’s say we have APs/clients capable of 1000mbit. With 20 users and 1 AP, if we wanted symmetrical speeds we need 40 equal slots: 20 slots on the AP (one for each user’s downloads) and 1 slot for each user to upload back. Every user gets 25mbit download and 25mbit upload.

    Contrast that to asymmetrical. Let’s say we do an 80/20 AP/client airtime split. We end up with 800mbit of shared download amongst everyone and 10mbit of upload per user.

    In the worst case scenario every user is downloading at the same time, meaning you get about 40mbit of that 800. That is still quite the improvement over 25mbit, and if some of those people aren’t home or aren’t active at the time, that means that much more for those who are active.

    I think the size of the slices is a little more dynamic on more modern systems, where the AP adjusts the user radios’ slices on the fly so that idle clients don’t take up a bunch of dead air, but they still need to have a little time allocated to them for when data does start to flow.

    A quick Google seems to show that DOCSIS cable modems use TDMA as well, so this all likely applies to cable users too.



  • I am assuming this is the LVM volume that Ubuntu creates if you selected the LVM option when installing.

    Think of LVM like a simpler, more flexible version of RAID0. It isn’t there to offer redundancy, but it can make multiple disks aggregate their storage/performance into a single block device. It doesn’t have all of the performance benefits of RAID0, particularly with sequential reads, but in the case of fileservers with multiple active users it can probably perform even better than a RAID0 volume would.

    The first thing to do would be to look at what volume groups you have. A volume group is one or more drives that creates a pool of storage that we can allocate space from to create logical volumes. Run vgdisplay and you will get a summary of all of the volume groups. If you see a lot of storage available in the ‘Free PE/Size’ (PE means physical extents) line that means that you have storage in the pool that hasn’t been allocated to a logical volume yet.

    If you have a set of OS disks and a separate set of storage disks it is probably a good idea to create a separate volume group for your storage disks instead of combining them with the OS disks. This keeps the OS and your storage separate so that it is easier to do things like rebuilding the OS or migrating to new hardware. If you have enough storage to keep your data volumes separate you should consider ZFS or btrfs for those volumes instead of LVM. ZFS/btrfs have a lot of extra features that can protect your data.

    If you don’t have free space then you might be missing additional drives that you want to have added to the pool. You can list all of the physical volumes which have been formatted to be used with LVM by running the pvs command. The pvs command shows you each formatted drive and whether it is associated with a volume group. If you have additional drives that you want to add to your volume group you can run pvcreate /dev/yourvolume to format them.

    Once the new drives have been formatted they need to be added to the volume group. Run vgextend volumegroupname /dev/yourvolume to add the new physical device to your volume group. You should re-run vgdisplay afterwards and verify the new physical extents have been added.

    If you are looking to have redundancy in this storage you would usually build an mdadm array and then do the pvcreate on the volume created by mdadm. LVM is usually not used to give you redundancy; other tools are better for that. Typically LVM is used for pooling storage, snapshots, multiple volumes from a large device, etc.

    So one way or another your additional space should be in the volume group now, however that doesn’t make it usable by the OS yet. On top of the volume group we create logical volumes. These are virtual block devices made up of physical extents on the physical disks. If you run lvdisplay you will see a list of logical volumes that were created by the Ubuntu installer which is probably only one by default.

    You can create new logical volumes with the lvcreate command, or extend/resize the volume that is already there with lvextend/lvresize. I see other posts already explained those commands in more detail.

    Once you have extended the logical volume (the virtual block device) you have to extend the filesystem on top of it. That procedure depends on what filesystem you are using on your logical volume. Likely resize2fs for ext4 by default in Ubuntu, or xfs_growfs if you are on XFS.
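
    Putting it all together, a typical “add a disk and grow the root volume” run looks roughly like this (the device name and the ubuntu-vg/ubuntu-lv names are the Ubuntu installer defaults and may differ on your system):

    sudo pvcreate /dev/sdb                                # format the new disk for LVM
    sudo vgextend ubuntu-vg /dev/sdb                      # add it to the existing volume group
    sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # grow the logical volume into the new space
    sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow the ext4 filesystem (xfs_growfs for XFS)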