| 2 |
DENIED
|
moderate
|
App\Entity\Entry {#1551
+user: App\Entity\User {#265 …}
+magazine: Proxies\__CG__\App\Entity\Magazine {#1725 …}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1702 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB hard drive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#1574
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1701 …}
+votes: Doctrine\ORM\PersistentCollection {#1687 …}
+reports: Doctrine\ORM\PersistentCollection {#1686 …}
+favourites: Doctrine\ORM\PersistentCollection {#1719 …}
+notifications: Doctrine\ORM\PersistentCollection {#1735 …}
+badges: Doctrine\ORM\PersistentCollection {#2463 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1409
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#1586
date: 2023-12-28 16:09:14.0 +01:00
}
} |
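The transfer described in the post above can be sketched as a single resumable rsync invocation. The mount points below (`/mnt/old12tb`, `/tank/backup`) are hypothetical placeholders, not the poster's real paths, and the command is printed rather than executed so the sketch is safe to run anywhere:

```shell
# Resumable rsync sketch for the 12TB -> raidz1 copy described above.
# SRC/DST are hypothetical placeholders for the real mount points.
SRC="/mnt/old12tb/"   # assumed mount point of the external USB drive
DST="/tank/backup/"   # assumed dataset path on the new ZFS pool
# -a archive mode, -H preserve hard links, --partial keep interrupted
# files so a re-run resumes them, --info=progress2 one overall progress
# line instead of per-file output.
echo rsync -aH --partial --info=progress2 "$SRC" "$DST"
```

Because `--partial` is set, rerunning the same command after a reboot resumes interrupted files instead of restarting them, and archive mode skips files that already transferred intact.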
| 3 |
DENIED
|
edit
|
App\Entity\Entry {#1551 …} |
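On the push-vs-pull question raised in the post: rsync over ssh is the same protocol either way; the only difference is which side initiates the connection, so it rarely changes throughput. A sketch of both directions, with hypothetical hostnames (`oldserver`, `nas`) and paths standing in for the real ones:

```shell
# Pull: run on the NAS, fetching from the old server (as the poster does).
PULL="rsync -aH --partial oldserver:/mnt/old12tb/ /tank/backup/"
# Push: run on the old server, sending to the NAS.
PUSH="rsync -aH --partial /mnt/old12tb/ nas:/tank/backup/"
echo "$PULL"
echo "$PUSH"
```

One practical difference: whichever side initiates is the side whose rsync process you can watch and interrupt, which matters here given that the hangs occur on the NAS.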
| 4 |
DENIED
|
moderate
|
App\Entity\Entry {#1551 …} |
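For the drive-health and "how would I tell?" questions in the post, a few standard diagnostics apply. They are collected here as a printed checklist rather than executed, since they require root access on the affected host; `/dev/sdX` is a placeholder for each suspect drive:

```shell
# Diagnostic checklist for a pool that hangs mid-transfer; printed only,
# since these commands need the affected machine and root privileges.
DIAG='dmesg | tail -n 50      # recent kernel messages: I/O errors, link resets
smartctl -a /dev/sdX       # SMART health report for one suspect drive
zpool status -v            # pool state; hanging here suggests a stuck I/O
zpool events -v            # recent ZFS event log, where available'
printf '%s\n' "$DIAG"
```

A `zpool status` that hangs with no output, as described above, typically means a ZFS I/O is stuck in the kernel, which is why the process also ignores SIGINT; `dmesg` on the affected host is usually the first place such controller or drive errors surface.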