Decision #1: DENIED
Attribute: ROLE_USER
Object: null
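
Decisions #1 and #5 below are bare ROLE_USER checks with a null subject, which is the shape a controller-level guard produces. A minimal sketch of such a guard, assuming Symfony's AbstractController helper; the controller class, route, and response are illustrative assumptions, not taken from this log:

```php
<?php
// Hypothetical controller guard producing a ROLE_USER decision with a null
// subject, as in decisions #1 and #5. Names are assumptions, not from the log.

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Annotation\Route;

class EntryCreateController extends AbstractController
{
    #[Route('/new', name: 'entry_create')]
    public function __invoke(): Response
    {
        // For an unauthenticated request the access decision is DENIED and the
        // profiler logs it with attribute ROLE_USER and a null subject.
        $this->denyAccessUnlessGranted('ROLE_USER');

        return new Response('entry form for authenticated users');
    }
}
```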

Decision #2: DENIED
Attribute: moderate
Object:
App\Entity\Entry {#1563
+user: App\Entity\User {#261 …}
+magazine: Proxies\__CG__\App\Entity\Magazine {#1720 …}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1618 …}
+slug: "Connection-to-external-drives-sometimes-breaks-on-reboot"
+title: "Connection to external drives sometimes breaks on reboot"
+url: null
+body: """
I’ve got a reoccurring issue with all of the home servers I’ve ever had and because it happened again just today, now the pain is big enough to ask publicly about it. \n
As of now, I’m running some Intel NUC ripoff with a JBOD attatched via USB 3, spinning a ZFS sort of-RAID. It’s nothing *that* special tbh. In the past I had several other configurations with external drives, wired via `fstab` to Raspberry Pis and the like. All of those shared a similar issue: I can’t recall exactly when, but I figure most of the time after updates to the kernel or docker, the computer(s) become stuck at boot. I had to unplug the external drives just to get the respective machine up, after which varying issues occurred with drives not being recognized anymore and such.\n
\n
With my current setup, I run several docker containers which have their volumes on subdirectories/datasets on the `/tank` mountpoint, and when booting the machine without the drives, some of the containers create new directories at that destination, which now lives on my main drive `/dev/sda`. \n
It’s not only painful to go through the manual process with the drives, I only have access the machine when I’m home, which I’m not all the time. Also, it’s kind of time consuming as I’m backup up data that I fear might become inconsistent along the way. Every time I see a big kernel update, I fear that the computer will get stuck in such a situation once again and I’m reluctant to do a proper reboot.\n
\n
I know that external drives are not best practice when it comes to handling “critical” data, but I don’t want to run another machine just in order to provide access to the disks via network. Any ideas where these issues stem from and how to avoid them in the future?
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 4
+favouriteCount: 3
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1699267034 {#1416
date: 2023-11-06 11:37:14.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1625 …}
+votes: Doctrine\ORM\PersistentCollection {#1617 …}
+reports: Doctrine\ORM\PersistentCollection {#1723 …}
+favourites: Doctrine\ORM\PersistentCollection {#2461 …}
+notifications: Doctrine\ORM\PersistentCollection {#2447 …}
+badges: Doctrine\ORM\PersistentCollection {#2471 …}
+children: []
-id: 9090
-titleTs: "'break':6 'connect':1 'drive':4 'extern':3 'reboot':8 'sometim':5"
-bodyTs: "'/dev/sda':190 '/tank':163 '3':51 'access':208,305 'along':241 'also':222 'anoth':298 'anymor':143 'ask':31 'attatch':48 'avoid':321 'backup':232 'becom':115,239 'best':282 'big':28,249 'boot':118,167 'come':286 'comput':113,256 'configur':72 'consum':228 'contain':154,176 'creat':177 'critic':289 'current':148 'data':234,290 'destin':182 'directori':179 'disk':308 'docker':111,153 'drive':75,125,139,172,189,204,279 'enough':29 'ever':15 'everi':244 'exact':96 'extern':74,124,278 'fear':237,253 'figur':100 'fstab':78 'futur':325 'get':128,258 'go':197 'got':3 'handl':288 'happen':20 'home':11,214 'idea':312 'inconsist':240 'intel':42 'issu':6,91,136,315 'jbod':47 'kernel':109,250 'kind':225 'know':276 'like':84 'live':185 'm':39,213,217,231,268 'machin':131,169,210,299 'main':188 'manual':200 'might':238 'mountpoint':164 'network':310 'new':178 'noth':61 'nuc':43 'occur':137 'of-raid':56 'order':302 'pain':26,195 'past':67 'pis':81 'practic':283 'process':201 'proper':273 'provid':304 'public':32 'raid':58 'raspberri':80 'reboot':274 'recal':95 'recogn':142 'reluct':269 'reoccur':5 'respect':130 'ripoff':44 'run':40,151,297 'see':247 'server':12 'setup':149 'sever':70,152 'share':88 'similar':90 'situat':263 'sort':55 'special':63 'spin':52 'stem':316 'stuck':116,259 'subdirectories/datasets':160 'tbh':64 'time':104,221,227,245 'today':23 'unplug':122 'updat':106,251 'usb':50 'vari':135 've':2,14 'via':49,77,309 'volum':158 'want':295 'way':243 'wire':76 'without':170 'zfs':54"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1698884990
+visibility: "visible "
+apId: "https://feddit.de/post/5186026"
+editedAt: null
+createdAt: DateTimeImmutable @1698859490 {#1668
date: 2023-11-01 18:24:50.0 +01:00
}
}

Decision #3: DENIED
Attribute: edit
Object:
App\Entity\Entry {#1563 …} (same object as decision #2; identical dump omitted)

Decision #4: DENIED
Attribute: moderate
Object:
App\Entity\Entry {#1563 …} (same object as decision #2; identical dump omitted)

Decision #5: DENIED
Attribute: ROLE_USER
Object: null

Decision #6: DENIED
Attribute: moderate
Object:
App\Entity\Entry {#2359
+user: App\Entity\User {#261 …}
+magazine: Proxies\__CG__\App\Entity\Magazine {#2364 …}
+image: Proxies\__CG__\App\Entity\Image {#2361 …}
+domain: Proxies\__CG__\App\Entity\Domain {#2363 …}
+slug: "I-dearly-needed-to-hear-that"
+title: "I dearly needed to hear that"
+url: "https://feddit.de/pictrs/image/18ba946c-5116-49ed-8acf-50e0330c792f.png"
+body: null
+type: "image"
+lang: "en"
+isOc: false
+hasEmbed: true
+commentCount: 0
+favouriteCount: 0
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1690288627 {#2470
date: 2023-07-25 14:37:07.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#2360 …}
+votes: Doctrine\ORM\PersistentCollection {#2382 …}
+reports: Doctrine\ORM\PersistentCollection {#2384 …}
+favourites: Doctrine\ORM\PersistentCollection {#2379 …}
+notifications: Doctrine\ORM\PersistentCollection {#2383 …}
+badges: Doctrine\ORM\PersistentCollection {#1906 …}
+children: []
-id: 25596
-titleTs: "'dear':2 'hear':5 'need':3"
-bodyTs: null
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 0
+visibility: "visible "
+apId: "https://feddit.de/post/1845241"
+editedAt: null
+createdAt: DateTimeImmutable @1690288627 {#2455
date: 2023-07-25 14:37:07.0 +02:00
}
}

Decision #7: DENIED
Attribute: edit
Object:
App\Entity\Entry {#2359 …} (same object as decision #6; identical dump omitted)

Decision #8: DENIED
Attribute: moderate
Object:
App\Entity\Entry {#2359 …} (same object as decision #6; identical dump omitted)
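
The denied edit and moderate attributes on App\Entity\Entry (decisions #2 to #4 and #6 to #8) are the kind of checks normally resolved by a custom security voter. A minimal sketch, assuming Symfony's Voter base class; the class name and the concrete author/moderator rules are illustrative assumptions, not taken from this log:

```php
<?php
// Hypothetical voter sketch for the "edit" / "moderate" attributes logged above.
// Class name and the concrete checks are assumptions, not taken from this log.

namespace App\Security\Voter;

use App\Entity\Entry;
use App\Entity\User;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Authorization\Voter\Voter;

class EntryVoter extends Voter
{
    protected function supports(string $attribute, mixed $subject): bool
    {
        // Only vote on "edit"/"moderate" when the subject is an Entry,
        // matching the objects dumped in the log above.
        return \in_array($attribute, ['edit', 'moderate'], true)
            && $subject instanceof Entry;
    }

    protected function voteOnAttribute(string $attribute, mixed $subject, TokenInterface $token): bool
    {
        $user = $token->getUser();
        if (!$user instanceof User) {
            return false; // anonymous requests are denied, as in the log
        }

        /** @var Entry $subject */
        return match ($attribute) {
            'edit' => $subject->user === $user, // assumed rule: only the author may edit
            'moderate' => false,                // placeholder: a real check would consult magazine moderators
            default => false,
        };
    }
}
```

All eight decisions above come back DENIED, which would be consistent with an unauthenticated request: the ROLE_USER checks and a voter shaped like this one both require a logged-in User before granting anything.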