• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Most hubs didn’t protect you from anything in particular.

    Most of them would forward everything to every port, and some really insane ones would strip out the spanning tree traffic that could have prevented a loop.

    It’s been a long time since I did anything that goes as far into a network as the desktop, but 15+ years ago we had a customer ring up with the same sort of complaint. After we followed the breadcrumbs on site we found a little 8-port hub ( that we hadn’t supplied ) plugged into two wall ports that went to two different Cisco edge switches in the server room, two Cisco phones with their passthrough ports both patched into the same switch, and then two desktop PCs.

    Amazing.


  • I replaced mythtv with tvheadend on the backend and kodi on the frontend like 5 or 6 years ago.

    The setup and configuration in mythtv at the time was slanted towards old ( obsolete ) analog tuners and static channel setup. Tvheadend was like a breath of fresh air in comparison: you could point it at a DVB mux or two and it would mostly do what you wanted without having to fight it.

    I’m not sure how much longer I’ll want something that can tune DVB-S2 and DVB-T though. Jellyfin and friends handle everything other than legacy TV better than kodi these days.


  • I don’t have a good answer for you.

    DHCPv6 is pretty well the only good way to have a prefix delegated by your ISP and have it chopped up and deployed in an automated fashion through multiple layers of an edge network. I’m also a real fan of the audit trail in the logs that results from a stateful transaction.

    Some background, if you haven’t run into it, is in this Google issue tracker entry: https://issuetracker.google.com/issues/36949085. The summary is that one guy at Google is obstructing DHCPv6 being implemented on Android.

    I’ve built out a bunch of IPv6 networks that implement DHCPv6 on the edge. I personally use a whole lot of Android devices and none of them get IPv6 addresses, while pretty well everything else does. I’m mostly cool with it at this point; eventually the guy who is obstructing DHCPv6 at Google will move on.
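
    If it helps anyone experimenting with prefix delegation, requesting a prefix on the upstream interface with ISC dhclient looks roughly like this ( the interface name and lease file path are just examples and vary by distro ):

    # Request an IPv6 address plus a delegated prefix ( -P ) on the upstream interface
    dhclient -6 -P -v wan0

    # The delegated prefix lands in the lease file; downstream networks each
    # get a /64 carved out of it by whatever is doing the chopping up
    grep -A4 ia-pd /var/lib/dhcp/dhclient6.leases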


  • The most impressed I’ve been with hardware encoding and decoding is with the built-in graphics on my little NUC.

    I’m using a NUC10i5FNH, which was only barely able to transcode one vaguely decent bitrate stream in software. It looked like passing the hardware transcoding through to a VM was going to be too messy, so I decided to reinstall Linux straight on the hardware.

    The hardware encoding and decoding performance was absolutely amazing. I must have opened up about 20 jellyfin windows that were transcoding before I gave up trying and called it good enough. I only really need about 4 maximum.

    The graphics on the 10th generation NUCs are the same sort of thing as on the 9th and 10th gen desktop CPUs, so if you have an Intel CPU with onboard graphics, give it a try.

    It’s way less trouble than the last time I built a similar setup with NVidia. I haven’t tried a Radeon card yet, but the jellyfin docs are a bit more negative about AMD.
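
    If you want to check the same thing on your own hardware, a rough sanity check plus a manual transcode through VAAPI looks something like this ( the file names are placeholders; jellyfin drives the same VAAPI path itself once hardware transcoding is enabled ):

    # Confirm the iGPU and its codec profiles are visible to VAAPI
    vainfo

    # One-off transcode on the Intel iGPU, roughly what jellyfin does internally
    ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
        -hwaccel_output_format vaapi -i input.mkv \
        -vf 'scale_vaapi=w=1280:h=720' -c:v h264_vaapi -b:v 4M \
        -c:a copy output.mkv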



  • I just read the update to the post saying that the issue has been narrowed down to the NTFS driver. I haven’t used NTFS on linux since the NTFS fuse driver was brand new and still wonky as hell something like 15 years ago, so I don’t know much about it.

    However, it sounds like the in-kernel ntfs3 driver was still pretty fresh in 5.15, so doing as you have suggested and trying out a 6.5 kernel instead is a pretty good call.
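
    If it’s useful, a quick way to see whether a mount is going through the old FUSE driver or the newer in-kernel one is something like this ( the device and mount point are placeholders ):

    # "fuseblk" in the type column means the old ntfs-3g FUSE driver
    mount | grep -i ntfs

    # Check the in-kernel ntfs3 driver is available and loaded
    modinfo ntfs3 | head -n 3
    lsmod | grep ntfs3

    # Remount explicitly with the kernel driver to compare behaviour
    umount /mnt/data
    mount -t ntfs3 /dev/sdb1 /mnt/data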


  • If you haven’t already, try running hdparm on your drives to get an idea of whether they are at least doing large raw reads straight off the disk at a reasonable speed.

    This is output from the little NUC I’m using right now:

    # lsblk
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
    sda      8:0    0 465.8G  0 disk 
    ├─sda1   8:1    0   512M  0 part /boot/efi
    ├─sda2   8:2    0 464.3G  0 part /
    └─sda3   8:3    0   976M  0 part [SWAP]
    
    # hdparm -i /dev/sda
    
    /dev/sda:
    
     Model=Samsung SSD 860 EVO 500GB, FwRev=RVT02B6Q, SerialNo=S3YANB0KB24583B
    ...
    
    # hdparm -t /dev/sda
    
    /dev/sda:
     Timing buffered disk reads: 1526 MB in  3.00 seconds = 508.21 MB/sec
    
    

    If your results are really poor for this test then it points more at the drive / cable / controller / linux controller driver.

    If the results are okay, then the issue is probably something more like a logical partitioning / filesystem driver issue.

    I’m not sure what a good benchmark application for Linux that also exercises the filesystem layer would be, other than bonnie++, which has been around forever. Someone else might have a more current suggestion.
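
    For what it’s worth, a basic bonnie++ run looks something like this ( the directory and size are placeholders; the working set wants to be a couple of times bigger than RAM so the page cache doesn’t hide the disk ):

    # Filesystem-level benchmark against the filesystem in question; the size
    # is in MB, and -u drops privileges because bonnie++ refuses to run as root
    bonnie++ -d /mnt/testdir -s 16384 -u nobody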


  • It might help for the folks here to know which brand and model of SSDs you have, what sort of SATA controllers the SATA ones are plugged into, and what sort of CPU and motherboard the NVMe one is connected to.

    What I can say is that Ubuntu 22.04 doesn’t have some mystery problem with SSDs. I work in a place where we have on the order of 100 Ubuntu 22.04 installs running with SSDs, all either older Intel ones or newer Samsung ones. They go great.
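
    If you want to gather that info quickly, something like this covers most of it ( smartctl comes from the smartmontools package ):

    # Model, size, transport ( sata / nvme ) and whether the drive is rotational
    lsblk -o NAME,MODEL,SIZE,TRAN,ROTA

    # Which SATA / NVMe controllers are on the board
    lspci | grep -iE 'sata|nvme'

    # Drive identity and firmware for one drive
    smartctl -i /dev/sda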




  • deadbeef@lemmy.nz to Technology@lemmy.world · Unsmart a smart TV · 3 months ago

    The Samsung TV that I bought for my son has this annoying overlay thing that pops up when you turn it on, shows all the different inputs, and nags about various things it thinks are wrong with the world. It is plugged into an Nvidia Shield that we do most things on, but you can’t use the Shield until the overlay calms the fuck down and disappears.

    It’d be great if you could just have the thing turn on and display an input like our older TVs do.


  • Which workflows? Asking because I’d like to experiment with some edge case stuff.

    I’m running KDE with Wayland on multiple machines of different vintages with AMD and Intel graphics, and it would take a lot for me to go back to the depressing old mess that was X.

    The biggest improvement in recent times was absolutely pulling out all my Nvidia cards and putting in second-hand Radeon cards, but switching to Wayland fixed all the dumb interactions between VRR ( and HDR ) capable monitors of mixed refresh rates.

    Even the little NUC that drives the three 4K TVs for the security cameras at work is a little happier with Wayland, running for weeks now with hardware decoding, rather than X crashing pretty well every few days.
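
    If you want to poke at the same edge cases, the quick checks I’d start with are something like this ( kscreen-doctor ships with KDE’s kscreen tooling ):

    # Confirm the session is actually Wayland rather than X
    echo $XDG_SESSION_TYPE

    # List outputs with their current modes and refresh rates under KDE
    kscreen-doctor -o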




  • I have an A1502 MacBook that I have been using for work since it was new in 2014. It triple boots Windows, Linux and OS X, but I only really use Linux.

    Mine has the same CPU, an i5-4308U, but 16GB of memory; I think it was a custom order at the time.

    If I recall correctly, I did the regular Boot Camp process you would do to install Windows, then installed Windows on a subset of the free space and Linux on the rest.

    I’ve got Linux Mint 21 on it currently, but I have had vanilla Ubuntu on it at different times. Offhand, I can’t think of anything on it that doesn’t just work.



  • deadbeef@lemmy.nz to Linux@lemmy.ml · I tried, I really did · 5 months ago

    I’m not the PR department for desktop Linux for everyone, man.

    People who only have Windows experience see an Nvidia card as a premium-priced product with a premium experience and think that this will translate to a Linux environment; it does not. I’ve been using Linux for something like 27 years now, and that was my opinion until a couple of years ago.

    Hopefully the folks who might read this thread ( like the OP, a 20-year IT veteran ) can take away that Nvidia cards on Linux are the troublesome / subpar choice and are only going to get worse going forwards ( because of the Wayland migration that Nvidia are ignoring ).


  • deadbeef@lemmy.nz to Linux@lemmy.ml · I tried, I really did · 5 months ago

    Oh yeah. That video of Linus Torvalds giving Nvidia the finger, linked elsewhere in this thread, was the result of a ton of frustration around them hiding programming info. They also popularised a dodgy system of LGPL’ing a shim which acted as a licence go-between for the kernel driver API ( drivers are supposed to be GPL’d ) and their proprietary obfuscated code.

    Despite that, I’m not really that anti them as a company. For me, the pragmatic reality is that spending a few hundred bucks on a Radeon is so much better than wasting hours performing arcane acts of fault finding and trial and error.


  • deadbeef@lemmy.nz to Linux@lemmy.ml · I tried, I really did · 5 months ago

    If you go back a bit further, multi-monitor support was just fine. Our office in about 2002 was full of folks running dual ( 19 inch tube! ) monitors off Matrox G400s with Xinerama on Red Hat 6.2 ( might have been 7.0 ). I can’t recall that being much trouble at all.

    There were even a bunch of good years for the proprietary Nvidia drivers; the poor quality is something that I’ve only really noticed in the last three or so years.


  • deadbeef@lemmy.nz to Linux@lemmy.ml · I tried, I really did · 5 months ago

    The support in KDE for larger numbers of monitors, mixed resolutions and odd layouts vastly improved in the Ubuntu 23.04 release. I wouldn’t install anything other than the latest LTS release for a server ( and generally a desktop ), but KDE was so much better that it was worth running something newer with short-term support on my desktops.

    We aren’t too far off the next LTS that will include that work anyway, I guess. I’m probably going to be making the move to Debian rather than trying that one out, though.