1 | DENIED | ROLE_USER | null |
2 | DENIED | moderate |
App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
} |
3 | DENIED | edit |
App\Entity\Entry {#2412} |
4 | DENIED | moderate |
App\Entity\Entry {#2412} |
5 | DENIED | ROLE_USER | null |
6 | DENIED | moderate |
App\Entity\EntryComment {#4048
+user: App\Entity\User {#3997 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
Just to make sure: are you copying to your ZFS pool directory or a dataset? Check to make sure your paths are correct.\n
\n
Push vs pull shouldn’t matter but I’ve always done push.\n
\n
If your zpool is not accessible anymore after a transfer then there is a low-level problem here as it shouldn’t just disappear.\n
\n
I would install tmux on your ZFS system and have a window with htop running, plus dmesg and zpool status running, to check your system while you copy files. Something that severe should become self-evident pretty quickly.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 7
+score: 0
+lastActive: DateTime @1703804368 {#4057
date: 2023-12-28 23:59:28.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4046 …}
+nested: Doctrine\ORM\PersistentCollection {#4044 …}
+votes: Doctrine\ORM\PersistentCollection {#4042 …}
+reports: Doctrine\ORM\PersistentCollection {#4040 …}
+favourites: Doctrine\ORM\PersistentCollection {#4008 …}
+notifications: Doctrine\ORM\PersistentCollection {#4012 …}
-id: 260716
-bodyTs: "'access':41 'alway':33 'anymor':42 'becom':93 'check':16,82 'copi':7,87 'correct':23 'dataset':15 'directori':12 'disappear':60 'dmesg':76 'done':34 'eved':95 'file':88 'htop':74 'install':63 'level':52 'low':51 'low-level':50 'make':3 'male':18 'matter':29 'path':21 'pool':11 'pretti':96 'problem':53 'pull':26 'push':24,35 'quick':97 'run':75,80 'self':94 'sever':91 'shouldn':27,57 'someth':89 'status':79 'sure':4,19 'system':68,84 'tmux':64 'transfer':45 've':32 'vs':25 'window':72 'would':62 'zfs':10,67 'zpool':38,78"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6283787"
+editedAt: null
+createdAt: DateTimeImmutable @1703804368 {#3989
date: 2023-12-28 23:59:28.0 +01:00
}
} |
7 | DENIED | edit |
App\Entity\EntryComment {#4048}
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
Just to make sure. Are you copying to your ZFS pool directory or a dataset? Check to make sure your paths are correct.\n
\n
Push vs pull shouldn’t matter but I’ve always done push.\n
\n
If your zpool is not accessible anymore after a transfer then there is a low-level problem here as it shouldn’t just disappear.\n
\n
I would install tmux on your ZFS system and have a window with htop running, dmesg, and zpool status running to check your system while you copy files. Something that severe should become self-evident pretty quickly.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 7
+score: 0
+lastActive: DateTime @1703804368 {#4057
date: 2023-12-28 23:59:28.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4046 …}
+nested: Doctrine\ORM\PersistentCollection {#4044 …}
+votes: Doctrine\ORM\PersistentCollection {#4042 …}
+reports: Doctrine\ORM\PersistentCollection {#4040 …}
+favourites: Doctrine\ORM\PersistentCollection {#4008 …}
+notifications: Doctrine\ORM\PersistentCollection {#4012 …}
-id: 260716
-bodyTs: "'access':41 'alway':33 'anymor':42 'becom':93 'check':16,82 'copi':7,87 'correct':23 'dataset':15 'directori':12 'disappear':60 'dmesg':76 'done':34 'eved':95 'file':88 'htop':74 'install':63 'level':52 'low':51 'low-level':50 'make':3 'male':18 'matter':29 'path':21 'pool':11 'pretti':96 'problem':53 'pull':26 'push':24,35 'quick':97 'run':75,80 'self':94 'sever':91 'shouldn':27,57 'someth':89 'status':79 'sure':4,19 'system':68,84 'tmux':64 'transfer':45 've':32 'vs':25 'window':72 'would':62 'zfs':10,67 'zpool':38,78"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6283787"
+editedAt: null
+createdAt: DateTimeImmutable @1703804368 {#3989
date: 2023-12-28 23:59:28.0 +01:00
}
} |
|
|
8 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4048
+user: App\Entity\User {#3997 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
Just to make sure. Are you copying to your ZFS pool directory or a dataset? Check to make sure your paths are correct.\n
\n
Push vs pull shouldn’t matter but I’ve always done push.\n
\n
If your zpool is not accessible anymore after a transfer then there is a low-level problem here as it shouldn’t just disappear.\n
\n
I would install tmux on your ZFS system and have a window with htop running, dmesg, and zpool status running to check your system while you copy files. Something that severe should become self-evident pretty quickly.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 7
+score: 0
+lastActive: DateTime @1703804368 {#4057
date: 2023-12-28 23:59:28.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4046 …}
+nested: Doctrine\ORM\PersistentCollection {#4044 …}
+votes: Doctrine\ORM\PersistentCollection {#4042 …}
+reports: Doctrine\ORM\PersistentCollection {#4040 …}
+favourites: Doctrine\ORM\PersistentCollection {#4008 …}
+notifications: Doctrine\ORM\PersistentCollection {#4012 …}
-id: 260716
-bodyTs: "'access':41 'alway':33 'anymor':42 'becom':93 'check':16,82 'copi':7,87 'correct':23 'dataset':15 'directori':12 'disappear':60 'dmesg':76 'done':34 'eved':95 'file':88 'htop':74 'install':63 'level':52 'low':51 'low-level':50 'make':3 'male':18 'matter':29 'path':21 'pool':11 'pretti':96 'problem':53 'pull':26 'push':24,35 'quick':97 'run':75,80 'self':94 'sever':91 'shouldn':27,57 'someth':89 'status':79 'sure':4,19 'system':68,84 'tmux':64 'transfer':45 've':32 'vs':25 'window':72 'would':62 'zfs':10,67 'zpool':38,78"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6283787"
+editedAt: null
+createdAt: DateTimeImmutable @1703804368 {#3989
date: 2023-12-28 23:59:28.0 +01:00
}
} |
|
|
9 |
DENIED
|
ROLE_USER
|
null |
|
|
10 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom, if I remember correctly."
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
} |
|
|
11 |
DENIED
|
edit
|
App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom of that, if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
} |
|
|
12 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom of that, if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
} |
|
|
13 |
DENIED
|
ROLE_USER
|
null |
|
|
14 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4542
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom of that, if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "They’re Seagate Exos, [www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/) and appear to be CMR"
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703792151 {#4543
date: 2023-12-28 20:35:51.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4537 …}
+nested: Doctrine\ORM\PersistentCollection {#4531 …}
+votes: Doctrine\ORM\PersistentCollection {#4535 …}
+reports: Doctrine\ORM\PersistentCollection {#4546 …}
+favourites: Doctrine\ORM\PersistentCollection {#4548 …}
+notifications: Doctrine\ORM\PersistentCollection {#4550 …}
-id: 260191
-bodyTs: "'/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':7 'appear':9 'cmr':12 'exo':4 're':2 'seagat':3 'www.seagate.com':6 'www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281147"
+editedAt: null
+createdAt: DateTimeImmutable @1703792151 {#4540
date: 2023-12-28 20:35:51.0 +01:00
}
} |
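The CMR/SMR question answered in the reply above can be partially checked from software: host-aware and host-managed SMR disks report their type through the kernel's `/sys/block/<dev>/queue/zoned` attribute. Note the caveat that drive-managed SMR (common in consumer disks) still reports `none`, so the vendor's CMR/SMR list the commenter linked remains the authoritative check. A small sketch; the helper name and device path are assumptions:

```shell
# classify_zoned: map the kernel's queue/zoned value to a recording type.
#   "none"                      -> conventional (or drive-managed SMR!)
#   "host-aware"/"host-managed" -> shingled (SMR)
classify_zoned() {
  case "$1" in
    none)                    echo "CMR (or drive-managed SMR)";;
    host-aware|host-managed) echo "SMR (shingled)";;
    *)                       echo "unknown";;
  esac
}

# Real usage would read sysfs for the device in question, e.g.:
#   classify_zoned "$(cat /sys/block/sda/queue/zoned)"
classify_zoned none
```

For Seagate Exos drives, as the commenter found, the attribute and the vendor list should both indicate conventional recording.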
|
|
15 |
DENIED
|
edit
|
App\Entity\EntryComment {#4542
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom of that, if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "They’re Seagate Exos, [www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/) and appear to be CMR"
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703792151 {#4543
date: 2023-12-28 20:35:51.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4537 …}
+nested: Doctrine\ORM\PersistentCollection {#4531 …}
+votes: Doctrine\ORM\PersistentCollection {#4535 …}
+reports: Doctrine\ORM\PersistentCollection {#4546 …}
+favourites: Doctrine\ORM\PersistentCollection {#4548 …}
+notifications: Doctrine\ORM\PersistentCollection {#4550 …}
-id: 260191
-bodyTs: "'/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':7 'appear':9 'cmr':12 'exo':4 're':2 'seagat':3 'www.seagate.com':6 'www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281147"
+editedAt: null
+createdAt: DateTimeImmutable @1703792151 {#4540
date: 2023-12-28 20:35:51.0 +01:00
}
} |
|
|
16 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4542
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom, if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "They’re Seagate Exos, [www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/) and appear to be CMR"
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703792151 {#4543
date: 2023-12-28 20:35:51.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4537 …}
+nested: Doctrine\ORM\PersistentCollection {#4531 …}
+votes: Doctrine\ORM\PersistentCollection {#4535 …}
+reports: Doctrine\ORM\PersistentCollection {#4546 …}
+favourites: Doctrine\ORM\PersistentCollection {#4548 …}
+notifications: Doctrine\ORM\PersistentCollection {#4550 …}
-id: 260191
-bodyTs: "'/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':7 'appear':9 'cmr':12 'exo':4 're':2 'seagat':3 'www.seagate.com':6 'www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281147"
+editedAt: null
+createdAt: DateTimeImmutable @1703792151 {#4540
date: 2023-12-28 20:35:51.0 +01:00
}
} |
|
|
17 |
DENIED
|
ROLE_USER
|
null |
|
|
18 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4634
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4542}
+root: App\Entity\EntryComment {#4094}
+body: "So next I’d be checking logs for sata errors, pcie errors and zfs kernel module errors. Anything that could shed light on what’s happening. If the system is locking up could it be some other part of the server with a hardware error, bad ram, out of memory, bad or full boot disk, etc."
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703793186 {#4633
date: 2023-12-28 20:53:06.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4638 …}
+nested: Doctrine\ORM\PersistentCollection {#4644 …}
+votes: Doctrine\ORM\PersistentCollection {#4640 …}
+reports: Doctrine\ORM\PersistentCollection {#4645 …}
+favourites: Doctrine\ORM\PersistentCollection {#4647 …}
+notifications: Doctrine\ORM\PersistentCollection {#4649 …}
-id: 260245
-bodyTs: "'anyth':18 'bad':46,51 'boot':54 'check':6 'could':20,33 'd':4 'disk':55 'error':10,12,17,45 'etc':56 'full':53 'happen':26 'hardwar':44 'kernel':15 'light':22 'lock':31 'log':7 'memori':50 'modul':16 'next':2 'part':38 'pcie':11 'ram':47 'sata':9 'server':41 'shed':21 'system':29 'zfs':14"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6927741"
+editedAt: null
+createdAt: DateTimeImmutable @1703793186 {#4630
date: 2023-12-28 20:53:06.0 +01:00
}
} |
|
|
19 |
DENIED
|
edit
|
App\Entity\EntryComment {#4634} |
|
|
20 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4634}
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4542
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "They’re Seagate Exos, [www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/) and appear to be CMR"
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703792151 {#4543
date: 2023-12-28 20:35:51.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4537 …}
+nested: Doctrine\ORM\PersistentCollection {#4531 …}
+votes: Doctrine\ORM\PersistentCollection {#4535 …}
+reports: Doctrine\ORM\PersistentCollection {#4546 …}
+favourites: Doctrine\ORM\PersistentCollection {#4548 …}
+notifications: Doctrine\ORM\PersistentCollection {#4550 …}
-id: 260191
-bodyTs: "'/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':7 'appear':9 'cmr':12 'exo':4 're':2 'seagat':3 'www.seagate.com':6 'www.seagate.com/products/cmr-smr-list/](https://www.seagate.com/products/cmr-smr-list/)':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281147"
+editedAt: null
+createdAt: DateTimeImmutable @1703792151 {#4540
date: 2023-12-28 20:35:51.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "So next I’d be checking logs for SATA errors, PCIe errors, and ZFS kernel module errors. Anything that could shed light on what’s happening. If the system is locking up, could it be some other part of the server with a hardware error: bad RAM, out of memory, a bad or full boot disk, etc.?"
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703793186 {#4633
date: 2023-12-28 20:53:06.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4638 …}
+nested: Doctrine\ORM\PersistentCollection {#4644 …}
+votes: Doctrine\ORM\PersistentCollection {#4640 …}
+reports: Doctrine\ORM\PersistentCollection {#4645 …}
+favourites: Doctrine\ORM\PersistentCollection {#4647 …}
+notifications: Doctrine\ORM\PersistentCollection {#4649 …}
-id: 260245
-bodyTs: "'anyth':18 'bad':46,51 'boot':54 'check':6 'could':20,33 'd':4 'disk':55 'error':10,12,17,45 'etc':56 'full':53 'happen':26 'hardwar':44 'kernel':15 'light':22 'lock':31 'log':7 'memori':50 'modul':16 'next':2 'part':38 'pcie':11 'ram':47 'sata':9 'server':41 'shed':21 'system':29 'zfs':14"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6927741"
+editedAt: null
+createdAt: DateTimeImmutable @1703793186 {#4630
date: 2023-12-28 20:53:06.0 +01:00
}
} |
|
|
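As an aside to the thread quoted in the entry body above (a zpool that stops responding partway through an rsync copy), a minimal diagnostic sequence for that symptom might look like the sketch below. This is not from the thread itself; the pool name `tank`, the mount points, and the device name are hypothetical placeholders.

```shell
# Pool health plus per-device read/write/checksum error counters
zpool status -v tank

# Kernel log: look for SATA link resets or controller errors around the stall
dmesg | grep -iE 'ata|sata|scsi|zfs'

# SMART health of a member disk (device name is an example)
smartctl -a /dev/sda

# Resumable copy: --partial keeps interrupted files, --info=progress2 shows overall totals
rsync -aHAX --partial --info=progress2 /mnt/usb12tb/ /tank/data/
```

These commands require root and real hardware, so they are illustrative only; `zpool status` hanging, as described in the post, usually points at a device or controller fault rather than at rsync.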
21 |
DENIED
|
ROLE_USER
|
null |
|
|
22 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4554
+user: Proxies\__CG__\App\Entity\User {#4555 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "I don’t think they make SMR drives that big"
+lang: "en"
+isAdult: false
+favouriteCount: 3
+score: 0
+lastActive: DateTime @1703789585 {#4552
date: 2023-12-28 19:53:05.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4556 …}
+nested: Doctrine\ORM\PersistentCollection {#4558 …}
+votes: Doctrine\ORM\PersistentCollection {#4560 …}
+reports: Doctrine\ORM\PersistentCollection {#4562 …}
+favourites: Doctrine\ORM\PersistentCollection {#4564 …}
+notifications: Doctrine\ORM\PersistentCollection {#4566 …}
-id: 260073
-bodyTs: "'big':10 'drive':8 'make':6 'smr':7 'think':4"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://thelemmy.club/comment/6473299"
+editedAt: null
+createdAt: DateTimeImmutable @1703789585 {#4553
date: 2023-12-28 19:53:05.0 +01:00
}
} |
|
|
23 |
DENIED
|
edit
|
App\Entity\EntryComment {#4554
+user: Proxies\__CG__\App\Entity\User {#4555 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "I don’t think they make SMR drives that big"
+lang: "en"
+isAdult: false
+favouriteCount: 3
+score: 0
+lastActive: DateTime @1703789585 {#4552
date: 2023-12-28 19:53:05.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4556 …}
+nested: Doctrine\ORM\PersistentCollection {#4558 …}
+votes: Doctrine\ORM\PersistentCollection {#4560 …}
+reports: Doctrine\ORM\PersistentCollection {#4562 …}
+favourites: Doctrine\ORM\PersistentCollection {#4564 …}
+notifications: Doctrine\ORM\PersistentCollection {#4566 …}
-id: 260073
-bodyTs: "'big':10 'drive':8 'make':6 'smr':7 'think':4"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://thelemmy.club/comment/6473299"
+editedAt: null
+createdAt: DateTimeImmutable @1703789585 {#4553
date: 2023-12-28 19:53:05.0 +01:00
}
} |
|
|
24 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4554
+user: Proxies\__CG__\App\Entity\User {#4555 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4094
+user: App\Entity\User {#4078 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "The drives in the zpool, are they SMR drives? Slow write speed and disks dropping out are a symptom if I remember correctly"
+lang: "en"
+isAdult: false
+favouriteCount: 4
+score: 0
+lastActive: DateTime @1708802254 {#4100
date: 2024-02-24 20:17:34.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4086 …}
+nested: Doctrine\ORM\PersistentCollection {#4089 …}
+votes: Doctrine\ORM\PersistentCollection {#4087 …}
+reports: Doctrine\ORM\PersistentCollection {#4085 …}
+favourites: Doctrine\ORM\PersistentCollection {#4084 …}
+notifications: Doctrine\ORM\PersistentCollection {#4075 …}
-id: 260001
-bodyTs: "'correct':23 'disk':14 'drive':2,9 'drop':15 'rememb':22 'slow':10 'smr':8 'speed':12 'symptom':19 'write':11 'zpool':5"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.ml/comment/6926204"
+editedAt: null
+createdAt: DateTimeImmutable @1703787810 {#4098
date: 2023-12-28 19:23:30.0 +01:00
}
}
+root: App\Entity\EntryComment {#4094}
+body: "I don’t think they make SMR drives that big"
+lang: "en"
+isAdult: false
+favouriteCount: 3
+score: 0
+lastActive: DateTime @1703789585 {#4552
date: 2023-12-28 19:53:05.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@martini1992@lemmy.ml"
]
+children: Doctrine\ORM\PersistentCollection {#4556 …}
+nested: Doctrine\ORM\PersistentCollection {#4558 …}
+votes: Doctrine\ORM\PersistentCollection {#4560 …}
+reports: Doctrine\ORM\PersistentCollection {#4562 …}
+favourites: Doctrine\ORM\PersistentCollection {#4564 …}
+notifications: Doctrine\ORM\PersistentCollection {#4566 …}
-id: 260073
-bodyTs: "'big':10 'drive':8 'make':6 'smr':7 'think':4"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://thelemmy.club/comment/6473299"
+editedAt: null
+createdAt: DateTimeImmutable @1703789585 {#4553
date: 2023-12-28 19:53:05.0 +01:00
}
} |
25 |
DENIED
|
ROLE_USER
|
null |
26 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4162
+user: App\Entity\User {#4175 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "Have you tried running it overnight to make sure it’s not just a performance thing?"
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1711281640 {#4157
date: 2024-03-24 13:00:40.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4163 …}
+nested: Doctrine\ORM\PersistentCollection {#4165 …}
+votes: Doctrine\ORM\PersistentCollection {#4167 …}
+reports: Doctrine\ORM\PersistentCollection {#4169 …}
+favourites: Doctrine\ORM\PersistentCollection {#4171 …}
+notifications: Doctrine\ORM\PersistentCollection {#4173 …}
-id: 260990
-bodyTs: "'make':8 'overnight':6 'perform':14 'run':4 'sure':9 'thing':15 'tri':3"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.zip/comment/5828995"
+editedAt: null
+createdAt: DateTimeImmutable @1703810188 {#4158
date: 2023-12-29 01:36:28.0 +01:00
}
} |
27 |
DENIED
|
edit
|
App\Entity\EntryComment {#4162} |
28 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4162} |
29 |
DENIED
|
ROLE_USER
|
null |
30 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4586
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4162
+user: App\Entity\User {#4175 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "Have you tried running it overnight to make sure its not just a performance thing?"
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1711281640 {#4157
date: 2024-03-24 13:00:40.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4163 …}
+nested: Doctrine\ORM\PersistentCollection {#4165 …}
+votes: Doctrine\ORM\PersistentCollection {#4167 …}
+reports: Doctrine\ORM\PersistentCollection {#4169 …}
+favourites: Doctrine\ORM\PersistentCollection {#4171 …}
+notifications: Doctrine\ORM\PersistentCollection {#4173 …}
-id: 260990
-bodyTs: "'make':8 'overnight':6 'perform':14 'run':4 'sure':9 'thing':15 'tri':3"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.zip/comment/5828995"
+editedAt: null
+createdAt: DateTimeImmutable @1703810188 {#4158
date: 2023-12-29 01:36:28.0 +01:00
}
}
+root: App\Entity\EntryComment {#4162}
+body: "I did, great suggestion. It never recovered."
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704301331 {#4584
date: 2024-01-03 18:02:11.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@possiblylinux127@lemmy.zip"
]
+children: Doctrine\ORM\PersistentCollection {#4587 …}
+nested: Doctrine\ORM\PersistentCollection {#4589 …}
+votes: Doctrine\ORM\PersistentCollection {#4591 …}
+reports: Doctrine\ORM\PersistentCollection {#4593 …}
+favourites: Doctrine\ORM\PersistentCollection {#4595 …}
+notifications: Doctrine\ORM\PersistentCollection {#4597 …}
-id: 276934
-bodyTs: "'great':3 'never':6 'recov':7 'suggest':4"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383634"
+editedAt: null
+createdAt: DateTimeImmutable @1704301331 {#4585
date: 2024-01-03 18:02:11.0 +01:00
}
} |
|
Show voter details
|
31 |
DENIED
|
edit
|
App\Entity\EntryComment {#4586
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4162}
+root: App\Entity\EntryComment {#4162}
+body: "I did, great suggestion. It never recovered."
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704301331 {#4584
date: 2024-01-03 18:02:11.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@possiblylinux127@lemmy.zip"
]
+children: Doctrine\ORM\PersistentCollection {#4587 …}
+nested: Doctrine\ORM\PersistentCollection {#4589 …}
+votes: Doctrine\ORM\PersistentCollection {#4591 …}
+reports: Doctrine\ORM\PersistentCollection {#4593 …}
+favourites: Doctrine\ORM\PersistentCollection {#4595 …}
+notifications: Doctrine\ORM\PersistentCollection {#4597 …}
-id: 276934
-bodyTs: "'great':3 'never':6 'recov':7 'suggest':4"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383634"
+editedAt: null
+createdAt: DateTimeImmutable @1704301331 {#4585
date: 2024-01-03 18:02:11.0 +01:00
}
} |
|
Show voter details
|
32 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4586} |
|
Show voter details
|
33 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
34 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4235
+user: App\Entity\User {#4248 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "If you’re running TrueNAS, the replication feature was the smoothest and easiest way to move large amounts of data when I did it 18 months back. Once the destination location was accessible from the sending host, it was as simple as kicking off a snapshot, resulting in a fully usable replica on the receiving host. IIRC, IXsystems staff told me rsync can be problematic compared to the replication/snapshot system, as permissions and other metadata can be lost."
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703797964 {#4230
date: 2023-12-28 22:12:44.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4236 …}
+nested: Doctrine\ORM\PersistentCollection {#4238 …}
+votes: Doctrine\ORM\PersistentCollection {#4240 …}
+reports: Doctrine\ORM\PersistentCollection {#4242 …}
+favourites: Doctrine\ORM\PersistentCollection {#4244 …}
+notifications: Doctrine\ORM\PersistentCollection {#4246 …}
-id: 260450
-bodyTs: "'18':25 'access':33 'amount':18 'back':27 'compar':66 'data':20 'destin':30 'easiest':13 'featur':8 'fulli':50 'host':37,56 'iirc':57 'ixsystem':58 'kick':43 'larg':17 'locat':31 'lost':78 'metadata':75 'month':26 'move':16 'permiss':72 'problemat':65 're':3 'receiv':55 'replic':7 'replica':52 'replication/snapshot':69 'result':47 'rsync':62 'run':4 'send':36 'simpl':41 'smoothest':11 'snapshot':46 'staff':59 'system':70 'told':60 'truena':5 'usabl':51 'way':14"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6282397"
+editedAt: null
+createdAt: DateTimeImmutable @1703797964 {#4231
date: 2023-12-28 22:12:44.0 +01:00
}
} |
|
Show voter details
|
35 |
DENIED
|
edit
|
App\Entity\EntryComment {#4235} |
|
Show voter details
|
36 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4235} |
|
Show voter details
|
37 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
38 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4310
+user: App\Entity\User {#4323 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
One thing I haven’t seen mentioned here, zfs *can* be quite finicky with some sata cards, especially raid cards.\n
\n
I suggest you connect the hard drives to the motherboard directly and test again.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711279470 {#4305
date: 2024-03-24 12:24:30.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4311 …}
+nested: Doctrine\ORM\PersistentCollection {#4313 …}
+votes: Doctrine\ORM\PersistentCollection {#4315 …}
+reports: Doctrine\ORM\PersistentCollection {#4317 …}
+favourites: Doctrine\ORM\PersistentCollection {#4319 …}
+notifications: Doctrine\ORM\PersistentCollection {#4321 …}
-id: 264496
-bodyTs: "'card':17,20 'connect':24 'direct':31 'drive':27 'especi':18 'finicki':13 'hard':26 'haven':4 'mention':7 'motherboard':30 'one':1 'quit':12 'raid':19 'sata':16 'seen':6 'suggest':22 'test':33 'thing':2 'zfs':9"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6303680"
+editedAt: null
+createdAt: DateTimeImmutable @1703935757 {#4306
date: 2023-12-30 12:29:17.0 +01:00
}
} |
|
Show voter details
|
39 |
DENIED
|
edit
|
App\Entity\EntryComment {#4310
+user: App\Entity\User {#4323 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
One thing I haven’t seen mentioned here, zfs *can* be quite finicky with some sata cards, especially raid cards.\n
\n
I suggest you connect the hard drives to the motherboard directly and test again.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711279470 {#4305
date: 2024-03-24 12:24:30.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4311 …}
+nested: Doctrine\ORM\PersistentCollection {#4313 …}
+votes: Doctrine\ORM\PersistentCollection {#4315 …}
+reports: Doctrine\ORM\PersistentCollection {#4317 …}
+favourites: Doctrine\ORM\PersistentCollection {#4319 …}
+notifications: Doctrine\ORM\PersistentCollection {#4321 …}
-id: 264496
-bodyTs: "'card':17,20 'connect':24 'direct':31 'drive':27 'especi':18 'finicki':13 'hard':26 'haven':4 'mention':7 'motherboard':30 'one':1 'quit':12 'raid':19 'sata':16 'seen':6 'suggest':22 'test':33 'thing':2 'zfs':9"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6303680"
+editedAt: null
+createdAt: DateTimeImmutable @1703935757 {#4306
date: 2023-12-30 12:29:17.0 +01:00
}
} |
|
|
40 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4310
+user: App\Entity\User {#4323 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
One thing I haven’t seen mentioned here, zfs *can* be quite finicky with some sata cards, especially raid cards.\n
\n
I suggest you connect the hard drives to the motherboard directly and test again.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711279470 {#4305
date: 2024-03-24 12:24:30.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4311 …}
+nested: Doctrine\ORM\PersistentCollection {#4313 …}
+votes: Doctrine\ORM\PersistentCollection {#4315 …}
+reports: Doctrine\ORM\PersistentCollection {#4317 …}
+favourites: Doctrine\ORM\PersistentCollection {#4319 …}
+notifications: Doctrine\ORM\PersistentCollection {#4321 …}
-id: 264496
-bodyTs: "'card':17,20 'connect':24 'direct':31 'drive':27 'especi':18 'finicki':13 'hard':26 'haven':4 'mention':7 'motherboard':30 'one':1 'quit':12 'raid':19 'sata':16 'seen':6 'suggest':22 'test':33 'thing':2 'zfs':9"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6303680"
+editedAt: null
+createdAt: DateTimeImmutable @1703935757 {#4306
date: 2023-12-30 12:29:17.0 +01:00
}
} |
|
|
41 |
DENIED
|
ROLE_USER
|
null |
|
|
42 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4601
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4310
+user: App\Entity\User {#4323 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
One thing I haven’t seen mentioned here, zfs *can* be quite finicky with some sata cards, especially raid cards.\n
\n
I suggest you connect the hard drives to the motherboard directly and test again.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711279470 {#4305
date: 2024-03-24 12:24:30.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4311 …}
+nested: Doctrine\ORM\PersistentCollection {#4313 …}
+votes: Doctrine\ORM\PersistentCollection {#4315 …}
+reports: Doctrine\ORM\PersistentCollection {#4317 …}
+favourites: Doctrine\ORM\PersistentCollection {#4319 …}
+notifications: Doctrine\ORM\PersistentCollection {#4321 …}
-id: 264496
-bodyTs: "'card':17,20 'connect':24 'direct':31 'drive':27 'especi':18 'finicki':13 'hard':26 'haven':4 'mention':7 'motherboard':30 'one':1 'quit':12 'raid':19 'sata':16 'seen':6 'suggest':22 'test':33 'thing':2 'zfs':9"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6303680"
+editedAt: null
+createdAt: DateTimeImmutable @1703935757 {#4306
date: 2023-12-30 12:29:17.0 +01:00
}
}
+root: App\Entity\EntryComment {#4310}
+body: "Thank you! I ended up connecting them directly to the main board and had the same result with rsync, eventually the zpool becomes inaccessible until reboot (ofc there may be other ways to recover it without reboot)."
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704301032 {#4599
date: 2024-01-03 17:57:12.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@LufyCZ@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4602 …}
+nested: Doctrine\ORM\PersistentCollection {#4604 …}
+votes: Doctrine\ORM\PersistentCollection {#4606 …}
+reports: Doctrine\ORM\PersistentCollection {#4608 …}
+favourites: Doctrine\ORM\PersistentCollection {#4610 …}
+notifications: Doctrine\ORM\PersistentCollection {#4612 …}
-id: 276907
-bodyTs: "'becom':23 'board':12 'connect':6 'direct':8 'end':4 'eventu':20 'inaccess':24 'main':11 'may':29 'ofc':27 'reboot':26,37 'recov':34 'result':17 'rsync':19 'thank':1 'way':32 'without':36 'zpool':22"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383537"
+editedAt: null
+createdAt: DateTimeImmutable @1704301032 {#4600
date: 2024-01-03 17:57:12.0 +01:00
}
} |
|
|
43 |
DENIED
|
edit
|
App\Entity\EntryComment {#4601
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4310
+user: App\Entity\User {#4323 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
One thing I haven’t seen mentioned here, zfs *can* be quite finicky with some sata cards, especially raid cards.\n
\n
I suggest you connect the hard drives to the motherboard directly and test again.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711279470 {#4305
date: 2024-03-24 12:24:30.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4311 …}
+nested: Doctrine\ORM\PersistentCollection {#4313 …}
+votes: Doctrine\ORM\PersistentCollection {#4315 …}
+reports: Doctrine\ORM\PersistentCollection {#4317 …}
+favourites: Doctrine\ORM\PersistentCollection {#4319 …}
+notifications: Doctrine\ORM\PersistentCollection {#4321 …}
-id: 264496
-bodyTs: "'card':17,20 'connect':24 'direct':31 'drive':27 'especi':18 'finicki':13 'hard':26 'haven':4 'mention':7 'motherboard':30 'one':1 'quit':12 'raid':19 'sata':16 'seen':6 'suggest':22 'test':33 'thing':2 'zfs':9"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6303680"
+editedAt: null
+createdAt: DateTimeImmutable @1703935757 {#4306
date: 2023-12-30 12:29:17.0 +01:00
}
}
+root: App\Entity\EntryComment {#4310}
+body: "Thank you! I ended up connecting them directly to the main board and had the same result with rsync, eventually the zpool becomes inaccessible until reboot (ofc there may be other ways to recover it without reboot)."
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704301032 {#4599
date: 2024-01-03 17:57:12.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@LufyCZ@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4602 …}
+nested: Doctrine\ORM\PersistentCollection {#4604 …}
+votes: Doctrine\ORM\PersistentCollection {#4606 …}
+reports: Doctrine\ORM\PersistentCollection {#4608 …}
+favourites: Doctrine\ORM\PersistentCollection {#4610 …}
+notifications: Doctrine\ORM\PersistentCollection {#4612 …}
-id: 276907
-bodyTs: "'becom':23 'board':12 'connect':6 'direct':8 'end':4 'eventu':20 'inaccess':24 'main':11 'may':29 'ofc':27 'reboot':26,37 'recov':34 'result':17 'rsync':19 'thank':1 'way':32 'without':36 'zpool':22"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383537"
+editedAt: null
+createdAt: DateTimeImmutable @1704301032 {#4600
date: 2024-01-03 17:57:12.0 +01:00
}
} |
|
|
44 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4601} |
|
|
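The entry body dumped above asks whether it is "better off pushing than pulling" with rsync. A minimal sketch of the commenter's point that there is little difference: both directions run the same transfer, and only the side that opens the connection changes. Flags, host names, and paths here are illustrative, not the poster's actual command.

```python
# Sketch of rsync "pull" vs. "push"; hosts and paths are invented.
def rsync_cmd(src: str, dst: str) -> list[str]:
    # --partial keeps interrupted files so a restarted run can resume;
    # --info=progress2 reports whole-transfer progress instead of per-file noise.
    return ["rsync", "-aH", "--partial", "--info=progress2", src, dst]

# Pull: run on the NAS, fetching from the old server.
pull = rsync_cmd("oldserver:/data/", "/tank/data/")
# Push: run on the old server, sending to the NAS.
push = rsync_cmd("/data/", "nas:/tank/data/")
print(" ".join(pull))
```

Either list could be handed to `subprocess.run`; the flag set is the same in both directions.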
45 |
DENIED
|
ROLE_USER
|
null |
|
|
46 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4383
+user: App\Entity\User {#4396 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "When things lock up, will a kill -9 kill rsync or not? If it doesn’t, and the zpool status lockup is suspicious, it means things are stuck inside a system call. I’ve seen all sorts of horrible things with usb timeouts. Check your syslog."
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711558563 {#4378
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4384 …}
+nested: Doctrine\ORM\PersistentCollection {#4386 …}
+votes: Doctrine\ORM\PersistentCollection {#4388 …}
+reports: Doctrine\ORM\PersistentCollection {#4390 …}
+favourites: Doctrine\ORM\PersistentCollection {#4392 …}
+notifications: Doctrine\ORM\PersistentCollection {#4394 …}
-id: 272605
-bodyTs: "'-9':8 'call':32 'check':44 'doesn':15 'horribl':39 'insid':29 'kill':7,9 'lock':3 'lockup':21 'mean':25 'rsync':10 'seen':35 'sort':37 'status':20 'stuck':28 'suspici':23 'syslog':46 'system':31 'thing':2,26,40 'timeout':43 'usb':42 've':34 'zpool':19"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/7677822"
+editedAt: null
+createdAt: DateTimeImmutable @1703816503 {#4379
date: 2023-12-29 03:21:43.0 +01:00
}
} |
|
|
47 |
DENIED
|
edit
|
App\Entity\EntryComment {#4383} |
|
|
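The "check your syslog" advice from the comment can be sketched as a filter for the kinds of USB/SATA timeout and reset messages that hint at a flaky link. The sample log lines below are invented for illustration; in practice you would feed this the output of `dmesg` or the system journal.

```python
import re

# Match the usual suspects in kernel logs for flaky USB/SATA links.
PATTERN = re.compile(r"\b(usb|ata\d|timeout|reset)\b", re.IGNORECASE)

def suspicious_lines(log: str) -> list[str]:
    """Return log lines that mention USB/ATA ports, timeouts, or resets."""
    return [line for line in log.splitlines() if PATTERN.search(line)]

# Invented sample: two link-error lines and one unrelated mount line.
sample = (
    "[100.1] ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4050000\n"
    "[100.2] usb 2-1: reset high-speed USB device number 4\n"
    "[101.0] EXT4-fs (sda1): mounted filesystem with ordered data mode\n"
)
for line in suspicious_lines(sample):
    print(line)
```

The two `ata3`/`usb` lines survive the filter; the unrelated mount message does not.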
48 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4383
+user: App\Entity\User {#4396 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "When things lock up, will a kill -9 kill rsync or not? If it doesn’t, and the zpool status lockup is suspicious, it means things are stuck inside a system call. I’ve seen all sorts of horrible things with usb timeouts. Check your syslog."
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711558563 {#4378
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4384 …}
+nested: Doctrine\ORM\PersistentCollection {#4386 …}
+votes: Doctrine\ORM\PersistentCollection {#4388 …}
+reports: Doctrine\ORM\PersistentCollection {#4390 …}
+favourites: Doctrine\ORM\PersistentCollection {#4392 …}
+notifications: Doctrine\ORM\PersistentCollection {#4394 …}
-id: 272605
-bodyTs: "'-9':8 'call':32 'check':44 'doesn':15 'horribl':39 'insid':29 'kill':7,9 'lock':3 'lockup':21 'mean':25 'rsync':10 'seen':35 'sort':37 'status':20 'stuck':28 'suspici':23 'syslog':46 'system':31 'thing':2,26,40 'timeout':43 'usb':42 've':34 'zpool':19"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/7677822"
+editedAt: null
+createdAt: DateTimeImmutable @1703816503 {#4379
date: 2023-12-29 03:21:43.0 +01:00
}
} |
|
Show voter details
|
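The commenter's advice above (an rsync that ignores `kill -9` is likely stuck inside an uninterruptible kernel system call) can be sketched as a quick shell check. This is a sketch assuming a Linux host with procps and coreutils; the pool name `tank` is a placeholder:

```shell
#!/bin/sh
# Processes in uninterruptible sleep (state "D") are blocked inside a
# kernel system call and ignore all signals, including SIGKILL.
# Keep the header row (NR == 1) plus any D-state processes.
ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /D/'

# Wrap zpool status in a timeout so a hung pool cannot also hang
# this shell session. "tank" is a placeholder pool name.
timeout 10 zpool status tank || echo "zpool status did not return within 10s"
```

If rsync shows up with a `D` state and a `wchan` pointing into ZFS or block-layer code, the stall is below rsync, in the pool or the controller, which matches the `zpool status` hang described in the thread.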
49 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
50 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4616
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4383
+user: App\Entity\User {#4396 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "When things lock up, will a kill -9 kill rsync or not? If it doesn’t, and the zpool status lockup is suspicious, it means things are stuck inside a system call. I’ve seen all sorts of horrible things with usb timeouts. Check your syslog."
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711558563 {#4378
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4384 …}
+nested: Doctrine\ORM\PersistentCollection {#4386 …}
+votes: Doctrine\ORM\PersistentCollection {#4388 …}
+reports: Doctrine\ORM\PersistentCollection {#4390 …}
+favourites: Doctrine\ORM\PersistentCollection {#4392 …}
+notifications: Doctrine\ORM\PersistentCollection {#4394 …}
-id: 272605
-bodyTs: "'-9':8 'call':32 'check':44 'doesn':15 'horribl':39 'insid':29 'kill':7,9 'lock':3 'lockup':21 'mean':25 'rsync':10 'seen':35 'sort':37 'status':20 'stuck':28 'suspici':23 'syslog':46 'system':31 'thing':2,26,40 'timeout':43 'usb':42 've':34 'zpool':19"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/7677822"
+editedAt: null
+createdAt: DateTimeImmutable @1703816503 {#4379
date: 2023-12-29 03:21:43.0 +01:00
}
}
+root: App\Entity\EntryComment {#4383}
+body: """
> kill -9\n
\n
Just tested, thanks for the suggestion! It killed a few instances of `rsync`, but there are two apparently stuck open. I issued `reboot` and the system seemed to hang while waiting for `rsync` to be killed and failed to unmount the zpool.\n
\n
Syslog errors:\n
\n
```\n
\n
Dec 31 16:53:34 halnas kernel: [54537.789982] #PF: error_code(0x0002) - not-present page\n
Jan 1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.\n
Jan 1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.\n
Jan 1 12:57:19 halnas kernel: [ 1.119609] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 12:57:19 halnas kernel: [ 1.120020] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 12:57:19 halnas kernel: [ 1.120315] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.119415] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.119814] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.120112] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.\n
Jan 1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.\n
Jan 2 02:23:18 halnas kernel: [12293.792282] gdbus[2809399]: segfault at 7ff71a8272e8 ip 00007ff7186f8045 sp 00007fffd5088de0 error 4 in libgio-2.0.so.0.7200.4[7ff718688000+111000]\n
Jan 2 02:23:22 halnas kernel: [12297.315463] unattended-upgr[2810494]: segfault at 7f4c1e8552e8 ip 00007f4c1c726045 sp 00007ffd1b866230 error 4 in libgio-2.0.so.0.7200.4[7f4c1c6b6000+111000]\n
Jan 2 03:46:29 halnas kernel: [17284.221594] #PF: error_code(0x0002) - not-present page\n
Jan 2 06:09:50 halnas kernel: [25885.115060] unattended-upgr[4109474]: segfault at 7faa356252e8 ip 00007faa334f6045 sp 00007ffefed011a0 error 4 in libgio-2.0.so.0.7200.4[7faa33486000+111000]\n
Jan 2 07:07:53 halnas kernel: [29368.241593] unattended-upgr[4109637]: segfault at 7f73f756c2e8 ip 00007f73f543d045 sp 00007ffc61f04ea0 error 4 in libgio-2.0.so.0.7200.4[7f73f53cd000+111000]\n
Jan 2 09:12:52 halnas kernel: [36867.632220] pool-fwupdmgr[4109819]: segfault at 7fcf244832e8 ip 00007fcf22354045 sp 00007fcf1dc00770 error 4 in libgio-2.0.so.0.7200.4[7fcf222e4000+111000]\n
Jan 2 12:37:50 halnas kernel: [49165.218100] #PF: error_code(0x0002) - not-present page\n
Jan 2 19:57:53 halnas kernel: [75568.443218] unattended-upgr[4110958]: segfault at 7fc4cab112e8 ip 00007fc4c89e2045 sp 00007fffb4ae2d90 error 4 in libgio-2.0.so.0.7200.4[7fc4c8972000+111000]\n
Jan 3 00:54:51 halnas snapd[1367]: stateengine.go:149: state ensure error: Post "https://api.snapcraft.io/v2/snaps/refresh": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
\n
```
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704372538 {#4614
date: 2024-01-04 13:48:58.0 +01:00
}
+ip: null
+tags: [
"323232"
"pf"
]
+mentions: [
"@isles@lemmy.world"
"@Zoidberg@lemm.ee"
]
+children: Doctrine\ORM\PersistentCollection {#4617 …}
+nested: Doctrine\ORM\PersistentCollection {#4619 …}
+votes: Doctrine\ORM\PersistentCollection {#4621 …}
+reports: Doctrine\ORM\PersistentCollection {#4623 …}
+favourites: Doctrine\ORM\PersistentCollection {#4625 …}
+notifications: Doctrine\ORM\PersistentCollection {#4627 …}
-id: 279353
-bodyTs: "'+111000':350,376,418,444,470,512 '-9':2 '/v2/snaps/refresh':528 '0':129,157,185,213,241,269 '0.7200.4':348,374,416,442,468,510 '00':121,149,177,205,233,261,515 '0000':120,148,176,204,232,260 '00007f4c1c726045':367 '00007f73f543d045':435 '00007faa334f6045':409 '00007fc4c89e2045':503 '00007fcf1dc00770':463 '00007fcf22354045':461 '00007ff7186f8045':341 '00007ffc61f04ea0':437 '00007ffd1b866230':369 '00007ffefed011a0':411 '00007fffb4ae2d90':505 '00007fffd5088de0':343 '02':329,353 '03':379 '06':395 '07':421,422 '08':199,227,255,283,307 '09':396,447 '0x0002':58,388,482 '1':64,70,88,94,112,140,168,196,224,252,280,286,304,310 '1.119415':202 '1.119609':118 '1.119814':230 '1.120020':146 '1.120112':258 '1.120315':174 '12':65,89,113,141,169,448,473 '12293.792282':334 '12297.315463':358 '1367':520 '16':49 '17284.221594':384 '18':331 '19':67,91,115,143,171,489 '1b.0':122,206 '1d.2':150,234 '1d.3':178,262 '2':328,352,378,394,420,446,472,488 '22':197,225,253,281,305,355 '23':330,354 '25885.115060':400 '2809399':336 '2810494':362 '29':381 '29368.241593':426 '3':514 '31':48 '34':51 '36867.632220':452 '37':474 '4':136,164,192,220,248,276,345,371,413,439,465,507 '4109474':404 '4109637':430 '4109819':456 '4110958':498 '46':380 '49165.218100':478 '50':397,475 '51':517 '52':449 '53':50,423,491 '54':516 '54537.789982':54 '57':66,90,114,142,170,490 '59':198,226,254,282,306 '75568.443218':494 '7f4c1c6b6000':375 '7f4c1e8552e8':365 '7f73f53cd000':443 '7f73f756c2e8':433 '7faa33486000':417 '7faa356252e8':407 '7fc4c8972000':511 '7fc4cab112e8':501 '7fcf222e4000':469 '7fcf244832e8':459 '7ff718688000':349 '7ff71a8272e8':339 'activeerr':138,166,194,222,250,278 'api.snapcraft.io':527 'api.snapcraft.io/v2/snaps/refresh':526 'appar':20 'automat':79,103,295,319 'await':539 'base':108,324 'cancel':531 'capabl':126,154,182,210,238,266 'check':72,96,288,312 'client.timeout':536 'code':57,387,481 'condit':71,95,287,311 'connect':535 'contain':125,153,181,209,237,265 'dec':47 
'dl':137,165,193,221,249,277 'dpc':123,151,179,207,235,263 'enabl':82,106,298,322 'ensur':523 'error':46,56,76,100,124,152,180,208,236,264,292,316,344,370,386,412,438,464,480,506,524 'exceed':537 'fail':40 'file':83,299 'fwupdmgr':455 'gdbus':335 'halna':52,68,92,116,144,172,200,228,256,284,308,332,356,382,398,424,450,476,492,518 'hang':31 'header':540 'instanc':13 'int':127,155,183,211,239,267 'ip':340,366,408,434,460,502 'issu':24 'jan':63,87,111,139,167,195,223,251,279,303,327,351,377,393,419,445,471,487,513 'kernel':53,117,145,173,201,229,257,333,357,383,399,425,451,477,493 'kill':1,10,38 'libgio-2.0.so':347,373,415,441,467,509 'log':135,163,191,219,247,275 'msg':128,156,184,212,240,268 'net/http':529 'not-pres':59,389,483 'open':22 'page':62,392,486 'pcieport':119,147,175,203,231,259 'pf':55,385,479 'pio':134,162,190,218,246,274 'poisonedtlp':131,159,187,215,243,271 'pool':454 'pool-fwupdmgr':453 'post':525 'present':61,391,485 'process':75,99,291,315 'reboot':25 'report':77,80,101,104,293,296,317,320 'request':530 'result':73,97,289,313 'rp':133,161,189,217,245,273 'rpext':130,158,186,214,242,270 'rsync':15,35 'seem':29 'segfault':337,363,405,431,457,499 'skip':86,110,302,326 'snapd':519 'sp':342,368,410,436,462,504 'state':522 'stateengine.go:149':521 'stuck':21 'suggest':8 'swtrigger':132,160,188,216,244,272 'syslog':45 'system':28 'systemd':69,93,285,309 'test':4 'thank':5 'timer':107,323 'two':19 'unattend':360,402,428,496 'unattended-upgr':359,401,427,495 'unmount':42 'upgr':361,403,429,497 'wait':33,533 'watch':84,300 'zpool':44"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6399582"
+editedAt: null
+createdAt: DateTimeImmutable @1704372538 {#4615
date: 2024-01-04 13:48:58.0 +01:00
}
} |
|
Show voter details
|
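The syslog excerpt in the comment above mixes three error families: PCIe DPC (Downstream Port Containment) events on the root ports, kernel page faults (`#PF`), and userspace segfaults in `libgio`. A rough way to pull just those lines back out of the logs, sketched here assuming a systemd host (`journalctl -k` and `--no-pager` are standard flags; the grep pattern is ad hoc):

```shell
#!/bin/sh
# Filter kernel/system logs for the error families quoted above.
PATTERN='DPC|#PF|segfault'

if command -v journalctl >/dev/null 2>&1; then
    # Kernel ring buffer via the systemd journal, last 50 matches.
    journalctl -k --no-pager | grep -E "$PATTERN" | tail -n 50
else
    # Fallback for non-systemd hosts.
    grep -E "$PATTERN" /var/log/syslog | tail -n 50
fi
```

Repeated DPC containment reports on the same root ports would be consistent with the thread's suspicion that the SATA controller (or its PCIe link) is misbehaving rather than the individual drives.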
51 |
DENIED
|
edit
|
App\Entity\EntryComment {#4616
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4383}
+root: App\Entity\EntryComment {#4383}
+body: """
> kill -9\n
\n
Just tested, thanks for the suggestion! It killed a few instances of `rsync`, but there are two apparently stuck open. I issued `reboot` and the system seemed to hang while waiting for `rsync` to be killed and failed to unmount the zpool.\n
\n
Syslog errors:\n
\n
```\n
\n
Dec 31 16:53:34 halnas kernel: [54537.789982] #PF: error_code(0x0002) - not-present page\n
Jan 1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.\n
Jan 1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.\n
Jan 1 12:57:19 halnas kernel: [ 1.119609] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 12:57:19 halnas kernel: [ 1.120020] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 12:57:19 halnas kernel: [ 1.120315] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.119415] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.119814] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.120112] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.\n
Jan 1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.\n
Jan 2 02:23:18 halnas kernel: [12293.792282] gdbus[2809399]: segfault at 7ff71a8272e8 ip 00007ff7186f8045 sp 00007fffd5088de0 error 4 in libgio-2.0.so.0.7200.4[7ff718688000+111000]\n
Jan 2 02:23:22 halnas kernel: [12297.315463] unattended-upgr[2810494]: segfault at 7f4c1e8552e8 ip 00007f4c1c726045 sp 00007ffd1b866230 error 4 in libgio-2.0.so.0.7200.4[7f4c1c6b6000+111000]\n
Jan 2 03:46:29 halnas kernel: [17284.221594] #PF: error_code(0x0002) - not-present page\n
Jan 2 06:09:50 halnas kernel: [25885.115060] unattended-upgr[4109474]: segfault at 7faa356252e8 ip 00007faa334f6045 sp 00007ffefed011a0 error 4 in libgio-2.0.so.0.7200.4[7faa33486000+111000]\n
Jan 2 07:07:53 halnas kernel: [29368.241593] unattended-upgr[4109637]: segfault at 7f73f756c2e8 ip 00007f73f543d045 sp 00007ffc61f04ea0 error 4 in libgio-2.0.so.0.7200.4[7f73f53cd000+111000]\n
Jan 2 09:12:52 halnas kernel: [36867.632220] pool-fwupdmgr[4109819]: segfault at 7fcf244832e8 ip 00007fcf22354045 sp 00007fcf1dc00770 error 4 in libgio-2.0.so.0.7200.4[7fcf222e4000+111000]\n
Jan 2 12:37:50 halnas kernel: [49165.218100] #PF: error_code(0x0002) - not-present page\n
Jan 2 19:57:53 halnas kernel: [75568.443218] unattended-upgr[4110958]: segfault at 7fc4cab112e8 ip 00007fc4c89e2045 sp 00007fffb4ae2d90 error 4 in libgio-2.0.so.0.7200.4[7fc4c8972000+111000]\n
Jan 3 00:54:51 halnas snapd[1367]: stateengine.go:149: state ensure error: Post "https://api.snapcraft.io/v2/snaps/refresh": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
\n
```
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704372538 {#4614
date: 2024-01-04 13:48:58.0 +01:00
}
+ip: null
+tags: [
"323232"
"pf"
]
+mentions: [
"@isles@lemmy.world"
"@Zoidberg@lemm.ee"
]
+children: Doctrine\ORM\PersistentCollection {#4617 …}
+nested: Doctrine\ORM\PersistentCollection {#4619 …}
+votes: Doctrine\ORM\PersistentCollection {#4621 …}
+reports: Doctrine\ORM\PersistentCollection {#4623 …}
+favourites: Doctrine\ORM\PersistentCollection {#4625 …}
+notifications: Doctrine\ORM\PersistentCollection {#4627 …}
-id: 279353
-bodyTs: "'+111000':350,376,418,444,470,512 '-9':2 '/v2/snaps/refresh':528 '0':129,157,185,213,241,269 '0.7200.4':348,374,416,442,468,510 '00':121,149,177,205,233,261,515 '0000':120,148,176,204,232,260 '00007f4c1c726045':367 '00007f73f543d045':435 '00007faa334f6045':409 '00007fc4c89e2045':503 '00007fcf1dc00770':463 '00007fcf22354045':461 '00007ff7186f8045':341 '00007ffc61f04ea0':437 '00007ffd1b866230':369 '00007ffefed011a0':411 '00007fffb4ae2d90':505 '00007fffd5088de0':343 '02':329,353 '03':379 '06':395 '07':421,422 '08':199,227,255,283,307 '09':396,447 '0x0002':58,388,482 '1':64,70,88,94,112,140,168,196,224,252,280,286,304,310 '1.119415':202 '1.119609':118 '1.119814':230 '1.120020':146 '1.120112':258 '1.120315':174 '12':65,89,113,141,169,448,473 '12293.792282':334 '12297.315463':358 '1367':520 '16':49 '17284.221594':384 '18':331 '19':67,91,115,143,171,489 '1b.0':122,206 '1d.2':150,234 '1d.3':178,262 '2':328,352,378,394,420,446,472,488 '22':197,225,253,281,305,355 '23':330,354 '25885.115060':400 '2809399':336 '2810494':362 '29':381 '29368.241593':426 '3':514 '31':48 '34':51 '36867.632220':452 '37':474 '4':136,164,192,220,248,276,345,371,413,439,465,507 '4109474':404 '4109637':430 '4109819':456 '4110958':498 '46':380 '49165.218100':478 '50':397,475 '51':517 '52':449 '53':50,423,491 '54':516 '54537.789982':54 '57':66,90,114,142,170,490 '59':198,226,254,282,306 '75568.443218':494 '7f4c1c6b6000':375 '7f4c1e8552e8':365 '7f73f53cd000':443 '7f73f756c2e8':433 '7faa33486000':417 '7faa356252e8':407 '7fc4c8972000':511 '7fc4cab112e8':501 '7fcf222e4000':469 '7fcf244832e8':459 '7ff718688000':349 '7ff71a8272e8':339 'activeerr':138,166,194,222,250,278 'api.snapcraft.io':527 'api.snapcraft.io/v2/snaps/refresh':526 'appar':20 'automat':79,103,295,319 'await':539 'base':108,324 'cancel':531 'capabl':126,154,182,210,238,266 'check':72,96,288,312 'client.timeout':536 'code':57,387,481 'condit':71,95,287,311 'connect':535 'contain':125,153,181,209,237,265 'dec':47 
'dl':137,165,193,221,249,277 'dpc':123,151,179,207,235,263 'enabl':82,106,298,322 'ensur':523 'error':46,56,76,100,124,152,180,208,236,264,292,316,344,370,386,412,438,464,480,506,524 'exceed':537 'fail':40 'file':83,299 'fwupdmgr':455 'gdbus':335 'halna':52,68,92,116,144,172,200,228,256,284,308,332,356,382,398,424,450,476,492,518 'hang':31 'header':540 'instanc':13 'int':127,155,183,211,239,267 'ip':340,366,408,434,460,502 'issu':24 'jan':63,87,111,139,167,195,223,251,279,303,327,351,377,393,419,445,471,487,513 'kernel':53,117,145,173,201,229,257,333,357,383,399,425,451,477,493 'kill':1,10,38 'libgio-2.0.so':347,373,415,441,467,509 'log':135,163,191,219,247,275 'msg':128,156,184,212,240,268 'net/http':529 'not-pres':59,389,483 'open':22 'page':62,392,486 'pcieport':119,147,175,203,231,259 'pf':55,385,479 'pio':134,162,190,218,246,274 'poisonedtlp':131,159,187,215,243,271 'pool':454 'pool-fwupdmgr':453 'post':525 'present':61,391,485 'process':75,99,291,315 'reboot':25 'report':77,80,101,104,293,296,317,320 'request':530 'result':73,97,289,313 'rp':133,161,189,217,245,273 'rpext':130,158,186,214,242,270 'rsync':15,35 'seem':29 'segfault':337,363,405,431,457,499 'skip':86,110,302,326 'snapd':519 'sp':342,368,410,436,462,504 'state':522 'stateengine.go:149':521 'stuck':21 'suggest':8 'swtrigger':132,160,188,216,244,272 'syslog':45 'system':28 'systemd':69,93,285,309 'test':4 'thank':5 'timer':107,323 'two':19 'unattend':360,402,428,496 'unattended-upgr':359,401,427,495 'unmount':42 'upgr':361,403,429,497 'wait':33,533 'watch':84,300 'zpool':44"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6399582"
+editedAt: null
+createdAt: DateTimeImmutable @1704372538 {#4615
date: 2024-01-04 13:48:58.0 +01:00
}
} |
|
Show voter details
|
52 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4616
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. Previously, I had been using an external 12TB USB hard drive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4383
+user: App\Entity\User {#4396 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: "When things lock up, will a kill -9 kill rsync or not? If it doesn’t, and the zpool status lockup is suspicious, it means things are stuck inside a system call. I’ve seen all sorts of horrible things with usb timeouts. Check your syslog."
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711558563 {#4378
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4384 …}
+nested: Doctrine\ORM\PersistentCollection {#4386 …}
+votes: Doctrine\ORM\PersistentCollection {#4388 …}
+reports: Doctrine\ORM\PersistentCollection {#4390 …}
+favourites: Doctrine\ORM\PersistentCollection {#4392 …}
+notifications: Doctrine\ORM\PersistentCollection {#4394 …}
-id: 272605
-bodyTs: "'-9':8 'call':32 'check':44 'doesn':15 'horribl':39 'insid':29 'kill':7,9 'lock':3 'lockup':21 'mean':25 'rsync':10 'seen':35 'sort':37 'status':20 'stuck':28 'suspici':23 'syslog':46 'system':31 'thing':2,26,40 'timeout':43 'usb':42 've':34 'zpool':19"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/7677822"
+editedAt: null
+createdAt: DateTimeImmutable @1703816503 {#4379
date: 2023-12-29 03:21:43.0 +01:00
}
}
+root: App\Entity\EntryComment {#4383}
+body: """
> kill -9\n
\n
Just tested, thanks for the suggestion! It killed a few instances of `rsync`, but there are two apparently stuck open. I issued `reboot` and the system seemed to hang while waiting for `rsync` to be killed and failed to unmount the zpool.\n
\n
Syslog errors:\n
\n
```\n
\n
Dec 31 16:53:34 halnas kernel: [54537.789982] #PF: error_code(0x0002) - not-present page\n
Jan 1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.\n
Jan 1 12:57:19 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.\n
Jan 1 12:57:19 halnas kernel: [ 1.119609] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 12:57:19 halnas kernel: [ 1.120020] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 12:57:19 halnas kernel: [ 1.120315] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.119415] pcieport 0000:00:1b.0: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.119814] pcieport 0000:00:1d.2: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas kernel: [ 1.120112] pcieport 0000:00:1d.3: DPC: error containment capabilities: Int Msg #0, RPExt+ PoisonedTLP+ SwTrigger+ RP PIO Log 4, DL_ActiveErr+\n
Jan 1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (file watch) being skipped.\n
Jan 1 22:59:08 halnas systemd[1]: Condition check resulted in Process error reports when automatic reporting is enabled (timer based) being skipped.\n
Jan 2 02:23:18 halnas kernel: [12293.792282] gdbus[2809399]: segfault at 7ff71a8272e8 ip 00007ff7186f8045 sp 00007fffd5088de0 error 4 in libgio-2.0.so.0.7200.4[7ff718688000+111000]\n
Jan 2 02:23:22 halnas kernel: [12297.315463] unattended-upgr[2810494]: segfault at 7f4c1e8552e8 ip 00007f4c1c726045 sp 00007ffd1b866230 error 4 in libgio-2.0.so.0.7200.4[7f4c1c6b6000+111000]\n
Jan 2 03:46:29 halnas kernel: [17284.221594] #PF: error_code(0x0002) - not-present page\n
Jan 2 06:09:50 halnas kernel: [25885.115060] unattended-upgr[4109474]: segfault at 7faa356252e8 ip 00007faa334f6045 sp 00007ffefed011a0 error 4 in libgio-2.0.so.0.7200.4[7faa33486000+111000]\n
Jan 2 07:07:53 halnas kernel: [29368.241593] unattended-upgr[4109637]: segfault at 7f73f756c2e8 ip 00007f73f543d045 sp 00007ffc61f04ea0 error 4 in libgio-2.0.so.0.7200.4[7f73f53cd000+111000]\n
Jan 2 09:12:52 halnas kernel: [36867.632220] pool-fwupdmgr[4109819]: segfault at 7fcf244832e8 ip 00007fcf22354045 sp 00007fcf1dc00770 error 4 in libgio-2.0.so.0.7200.4[7fcf222e4000+111000]\n
Jan 2 12:37:50 halnas kernel: [49165.218100] #PF: error_code(0x0002) - not-present page\n
Jan 2 19:57:53 halnas kernel: [75568.443218] unattended-upgr[4110958]: segfault at 7fc4cab112e8 ip 00007fc4c89e2045 sp 00007fffb4ae2d90 error 4 in libgio-2.0.so.0.7200.4[7fc4c8972000+111000]\n
Jan 3 00:54:51 halnas snapd[1367]: stateengine.go:149: state ensure error: Post "https://api.snapcraft.io/v2/snaps/refresh": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\n
\n
```
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1704372538 {#4614
date: 2024-01-04 13:48:58.0 +01:00
}
+ip: null
+tags: [
"323232"
"pf"
]
+mentions: [
"@isles@lemmy.world"
"@Zoidberg@lemm.ee"
]
+children: Doctrine\ORM\PersistentCollection {#4617 …}
+nested: Doctrine\ORM\PersistentCollection {#4619 …}
+votes: Doctrine\ORM\PersistentCollection {#4621 …}
+reports: Doctrine\ORM\PersistentCollection {#4623 …}
+favourites: Doctrine\ORM\PersistentCollection {#4625 …}
+notifications: Doctrine\ORM\PersistentCollection {#4627 …}
-id: 279353
-bodyTs: "'+111000':350,376,418,444,470,512 '-9':2 '/v2/snaps/refresh':528 '0':129,157,185,213,241,269 '0.7200.4':348,374,416,442,468,510 '00':121,149,177,205,233,261,515 '0000':120,148,176,204,232,260 '00007f4c1c726045':367 '00007f73f543d045':435 '00007faa334f6045':409 '00007fc4c89e2045':503 '00007fcf1dc00770':463 '00007fcf22354045':461 '00007ff7186f8045':341 '00007ffc61f04ea0':437 '00007ffd1b866230':369 '00007ffefed011a0':411 '00007fffb4ae2d90':505 '00007fffd5088de0':343 '02':329,353 '03':379 '06':395 '07':421,422 '08':199,227,255,283,307 '09':396,447 '0x0002':58,388,482 '1':64,70,88,94,112,140,168,196,224,252,280,286,304,310 '1.119415':202 '1.119609':118 '1.119814':230 '1.120020':146 '1.120112':258 '1.120315':174 '12':65,89,113,141,169,448,473 '12293.792282':334 '12297.315463':358 '1367':520 '16':49 '17284.221594':384 '18':331 '19':67,91,115,143,171,489 '1b.0':122,206 '1d.2':150,234 '1d.3':178,262 '2':328,352,378,394,420,446,472,488 '22':197,225,253,281,305,355 '23':330,354 '25885.115060':400 '2809399':336 '2810494':362 '29':381 '29368.241593':426 '3':514 '31':48 '34':51 '36867.632220':452 '37':474 '4':136,164,192,220,248,276,345,371,413,439,465,507 '4109474':404 '4109637':430 '4109819':456 '4110958':498 '46':380 '49165.218100':478 '50':397,475 '51':517 '52':449 '53':50,423,491 '54':516 '54537.789982':54 '57':66,90,114,142,170,490 '59':198,226,254,282,306 '75568.443218':494 '7f4c1c6b6000':375 '7f4c1e8552e8':365 '7f73f53cd000':443 '7f73f756c2e8':433 '7faa33486000':417 '7faa356252e8':407 '7fc4c8972000':511 '7fc4cab112e8':501 '7fcf222e4000':469 '7fcf244832e8':459 '7ff718688000':349 '7ff71a8272e8':339 'activeerr':138,166,194,222,250,278 'api.snapcraft.io':527 'api.snapcraft.io/v2/snaps/refresh':526 'appar':20 'automat':79,103,295,319 'await':539 'base':108,324 'cancel':531 'capabl':126,154,182,210,238,266 'check':72,96,288,312 'client.timeout':536 'code':57,387,481 'condit':71,95,287,311 'connect':535 'contain':125,153,181,209,237,265 'dec':47 
'dl':137,165,193,221,249,277 'dpc':123,151,179,207,235,263 'enabl':82,106,298,322 'ensur':523 'error':46,56,76,100,124,152,180,208,236,264,292,316,344,370,386,412,438,464,480,506,524 'exceed':537 'fail':40 'file':83,299 'fwupdmgr':455 'gdbus':335 'halna':52,68,92,116,144,172,200,228,256,284,308,332,356,382,398,424,450,476,492,518 'hang':31 'header':540 'instanc':13 'int':127,155,183,211,239,267 'ip':340,366,408,434,460,502 'issu':24 'jan':63,87,111,139,167,195,223,251,279,303,327,351,377,393,419,445,471,487,513 'kernel':53,117,145,173,201,229,257,333,357,383,399,425,451,477,493 'kill':1,10,38 'libgio-2.0.so':347,373,415,441,467,509 'log':135,163,191,219,247,275 'msg':128,156,184,212,240,268 'net/http':529 'not-pres':59,389,483 'open':22 'page':62,392,486 'pcieport':119,147,175,203,231,259 'pf':55,385,479 'pio':134,162,190,218,246,274 'poisonedtlp':131,159,187,215,243,271 'pool':454 'pool-fwupdmgr':453 'post':525 'present':61,391,485 'process':75,99,291,315 'reboot':25 'report':77,80,101,104,293,296,317,320 'request':530 'result':73,97,289,313 'rp':133,161,189,217,245,273 'rpext':130,158,186,214,242,270 'rsync':15,35 'seem':29 'segfault':337,363,405,431,457,499 'skip':86,110,302,326 'snapd':519 'sp':342,368,410,436,462,504 'state':522 'stateengine.go:149':521 'stuck':21 'suggest':8 'swtrigger':132,160,188,216,244,272 'syslog':45 'system':28 'systemd':69,93,285,309 'test':4 'thank':5 'timer':107,323 'two':19 'unattend':360,402,428,496 'unattended-upgr':359,401,427,495 'unmount':42 'upgr':361,403,429,497 'wait':33,533 'watch':84,300 'zpool':44"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6399582"
+editedAt: null
+createdAt: DateTimeImmutable @1704372538 {#4615
date: 2024-01-04 13:48:58.0 +01:00
}
} |
|
Show voter details
|
53 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
54 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. Previously, I had been using an external 12TB USB hard drive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
} |
|
Show voter details
|
55 |
DENIED
|
edit
|
App\Entity\EntryComment {#4456} |
|
Show voter details
|
56 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4456} |
|
Show voter details
|
57 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
58 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
} |
|
Show voter details
|
59 |
DENIED
|
edit
|
App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
} |
|
|
60 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
} |
|
|
61 |
DENIED
|
ROLE_USER
|
null |
|
|
62 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understand the pool shouldn’t become inaccessible tho and only get slow. So there might be another issue.\n
\n
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
} |
|
Show voter details
|
63 |
DENIED
|
edit
|
App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I had previously been using an external 12TB USB hard drive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
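
Since the copy keeps dying partway through, it may also be worth using flags that make rsync cheap to resume. A sketch of the pull I’d try (hostnames and paths here are made up, adjust to your setup):

```shell
# Resumable pull over SSH: --partial keeps interrupted files,
# --append-verify continues them instead of restarting from zero.
rsync -aH --partial --append-verify --info=progress2 \
    olduser@oldserver:/mnt/usb12tb/ /tank/data/
```

Re-running the same command after a reboot picks up roughly where it left off.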
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64 GB of RAM and then extend with an L2ARC SSD, assuming no other hardware errors.
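
For when you get there, attaching the L2ARC device is a one-liner; a sketch, where the pool name `tank` and the device path are placeholders for your own:

```shell
# Attach an SSD to an existing pool as an L2ARC (read cache) device.
zpool add tank cache /dev/disk/by-id/YOUR-SSD-ID
# Verify it appears under the "cache" section:
zpool status tank
```

Cache devices are also removable again with `zpool remove` if the SSD doesn’t end up helping.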
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding, the pool shouldn’t become inaccessible, though, only slow. So there might be another issue.\n
\n
Edit2: here’s a guide for checking whether your system is limited by ZFS’s memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
} |
|
Show voter details
|
64 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I had previously been using an external 12TB USB hard drive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64 GB of RAM and then extend with an L2ARC SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding, the pool shouldn’t become inaccessible, though, only slow. So there might be another issue.\n
\n
Edit2: here’s a guide for checking whether your system is limited by ZFS’s memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
} |
|
Show voter details
|
65 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
66 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4669
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I had previously been using an external 12TB USB hard drive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
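Building on the suggestion above: a detached tmux session that snapshots memory and load once a minute leaves a record of the lead-up to the next stall. Session name and log path here are arbitrary:

```shell
# Log a timestamp, memory usage and load every 60 s:
command -v tmux >/dev/null && tmux new-session -d -s zfswatch \
  'while true; do date; free -m; uptime; echo ---; sleep 60; done >> "$HOME/zfswatch.log"' || true
# Reattach later:  tmux attach -t zfswatch
# Stop logging:    tmux kill-session -t zfswatch
```

After a hang, `tail -n 40 ~/zfswatch.log` shows whether memory was exhausted just before things jammed.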
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
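For reference, extending an existing pool with an L2ARC device is a one-liner; pool name `tank` and the device path below are placeholders. Note that L2ARC itself consumes some RAM for its headers, so it complements rather than replaces the RAM upgrade:

```shell
# Attach an SSD as a read cache (L2ARC) to an existing pool:
zpool add tank cache /dev/disk/by-id/ata-SOME_SSD_SERIAL
# The SSD should now show up under a "cache" heading:
zpool status tank
# An L2ARC device can later be removed without data loss:
#   zpool remove tank /dev/disk/by-id/ata-SOME_SSD_SERIAL
```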
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding, the pool shouldn’t become inaccessible, though, only get slow. So there might be another issue.\n
\n
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
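Dedup is easy to rule in or out: it is off by default and only costs RAM if it was explicitly enabled. A quick check of that plus the current ARC footprint, assuming OpenZFS on Linux (pool name hypothetical):

```shell
# "dedup" should read 1.00x unless it was turned on at some point:
command -v zpool >/dev/null && zpool list -o name,size,alloc,free,dedup tank || true
# Current and maximum ARC size, in bytes:
[ -r /proc/spl/kstat/zfs/arcstats ] && \
  awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats || true
```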
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: "Just another thought… Maybe just format the drives as a massive EXT4 JBOD (just for a temp test) and copy the data again - just to see if ZFS is the problem… maybe it’s something else altogether? Maybe - and I hope not - the USB source drive is failing after long reads?"
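Before reformatting anything, the USB source can be soak-read non-destructively to test the "failing after long reads" theory; the device name below is a placeholder, and `badblocks` without `-w` is read-only:

```shell
# -s shows progress, -v is verbose; default mode only reads:
badblocks -sv /dev/sdX
# Or stream the device with plain coreutils and watch dmesg for errors:
dd if=/dev/sdX of=/dev/null bs=1M status=progress
```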
+lang: "en"
+isAdult: false
+favouriteCount: 3
+score: 0
+lastActive: DateTime @1703808110 {#4674
date: 2023-12-29 01:01:50.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@sonstwas@sh.itjust.works"
]
+children: Doctrine\ORM\PersistentCollection {#4677 …}
+nested: Doctrine\ORM\PersistentCollection {#4681 …}
+votes: Doctrine\ORM\PersistentCollection {#4683 …}
+reports: Doctrine\ORM\PersistentCollection {#4684 …}
+favourites: Doctrine\ORM\PersistentCollection {#4686 …}
+notifications: Doctrine\ORM\PersistentCollection {#4688 …}
-id: 260888
-bodyTs: "'altogeth':37 'anoth':2 'copi':20 'data':22 'drive':8,46 'els':36 'ext4':12 'fail':48 'format':6 'hope':41 'jbod':13 'long':50 'massiv':11 'mayb':4,32,38 'problem':31 'read':51 'see':26 'someth':35 'sourc':45 'temp':17 'test':18 'thought':3 'usb':44 'zfs':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5446265"
+editedAt: null
+createdAt: DateTimeImmutable @1703808110 {#4671
date: 2023-12-29 01:01:50.0 +01:00
}
} |
|
Show voter details
|
67 |
DENIED
|
edit
|
App\Entity\EntryComment {#4669 …} |
|
Show voter details
|
68 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4669 …}
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding the pool shouldn’t become inaccessible tho and only get slow. So there might be another issue.\n
\n
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: "Just another thought… Maybe just format the drives as a massive EXT4 JBOD (just for a temp test) and copy the data again - just to see if ZFS is the problem… maybe it’s something else altogether? Maybe - and I hope not - the USB source drive is failing after long reads?"
+lang: "en"
+isAdult: false
+favouriteCount: 3
+score: 0
+lastActive: DateTime @1703808110 {#4674
date: 2023-12-29 01:01:50.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@sonstwas@sh.itjust.works"
]
+children: Doctrine\ORM\PersistentCollection {#4677 …}
+nested: Doctrine\ORM\PersistentCollection {#4681 …}
+votes: Doctrine\ORM\PersistentCollection {#4683 …}
+reports: Doctrine\ORM\PersistentCollection {#4684 …}
+favourites: Doctrine\ORM\PersistentCollection {#4686 …}
+notifications: Doctrine\ORM\PersistentCollection {#4688 …}
-id: 260888
-bodyTs: "'altogeth':37 'anoth':2 'copi':20 'data':22 'drive':8,46 'els':36 'ext4':12 'fail':48 'format':6 'hope':41 'jbod':13 'long':50 'massiv':11 'mayb':4,32,38 'problem':31 'read':51 'see':26 'someth':35 'sourc':45 'temp':17 'test':18 'thought':3 'usb':44 'zfs':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5446265"
+editedAt: null
+createdAt: DateTimeImmutable @1703808110 {#4671
date: 2023-12-29 01:01:50.0 +01:00
}
} |
|
Show voter details
|
69 |
DENIED
|
ROLE_USER
|
null |
|
Show voter details
|
70 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4692
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding the pool shouldn’t become inaccessible tho and only get slow. So there might be another issue.\n
\n
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: "I believe there’s another issue. ZFS has been using nearly all RAM (which is fine, I only need RAM for system and ZFS anyway, there’s nothing else running on this box), but I was pretty convinced while I was looking that I don’t have dedup turned on. Thanks for your suggestions and links!"
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1704301495 {#4690
date: 2024-01-03 18:04:55.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@sonstwas@sh.itjust.works"
]
+children: Doctrine\ORM\PersistentCollection {#4693 …}
+nested: Doctrine\ORM\PersistentCollection {#4695 …}
+votes: Doctrine\ORM\PersistentCollection {#4697 …}
+reports: Doctrine\ORM\PersistentCollection {#4699 …}
+favourites: Doctrine\ORM\PersistentCollection {#4701 …}
+notifications: Doctrine\ORM\PersistentCollection {#4703 …}
-id: 276944
-bodyTs: "'anoth':5 'anyway':25 'believ':2 'box':33 'convinc':38 'dedup':48 'els':29 'fine':16 'issu':6 'link':56 'look':42 'near':11 'need':19 'noth':28 'pretti':37 'ram':13,20 'run':30 'suggest':54 'system':22 'thank':51 'turn':49 'use':10 'zfs':7,24"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383677"
+editedAt: null
+createdAt: DateTimeImmutable @1704301495 {#4691
date: 2024-01-03 18:04:55.0 +01:00
}
} |
|
Show voter details
|
71 |
DENIED
|
edit
|
App\Entity\EntryComment {#4692
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding, the pool shouldn’t become inaccessible though, only get slow. So there might be another issue.\n
\n
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: "I believe there’s another issue. ZFS has been using nearly all RAM (which is fine, I only need RAM for system and ZFS anyway, there’s nothing else running on this box), but I was pretty convinced while I was looking that I don’t have dedup turned on. Thanks for your suggestions and links!"
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1704301495 {#4690
date: 2024-01-03 18:04:55.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@sonstwas@sh.itjust.works"
]
+children: Doctrine\ORM\PersistentCollection {#4693 …}
+nested: Doctrine\ORM\PersistentCollection {#4695 …}
+votes: Doctrine\ORM\PersistentCollection {#4697 …}
+reports: Doctrine\ORM\PersistentCollection {#4699 …}
+favourites: Doctrine\ORM\PersistentCollection {#4701 …}
+notifications: Doctrine\ORM\PersistentCollection {#4703 …}
-id: 276944
-bodyTs: "'anoth':5 'anyway':25 'believ':2 'box':33 'convinc':38 'dedup':48 'els':29 'fine':16 'issu':6 'link':56 'look':42 'near':11 'need':19 'noth':28 'pretti':37 'ram':13,20 'run':30 'suggest':54 'system':22 'thank':51 'turn':49 'use':10 'zfs':7,24"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383677"
+editedAt: null
+createdAt: DateTimeImmutable @1704301495 {#4691
date: 2024-01-03 18:04:55.0 +01:00
}
} |
|
Show voter details
|
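The comment above suggests watching the system with `nmon` or `htop` in a `tmux` / `screen` pane to catch what changes when the transfer jams. That watch can also be scripted; a minimal sketch, assuming a Linux host, that parses `/proc/meminfo` (field names per the kernel's proc(5) format):

```python
# Sketch: sample free memory during a long transfer, a scriptable
# alternative to watching htop in a tmux/screen pane. Linux-only:
# parses /proc/meminfo (field names per the kernel's proc(5) docs).
import re

def parse_meminfo(text: str) -> dict:
    """Parse /proc/meminfo-style text into {field: kibibytes}."""
    fields = {}
    for line in text.splitlines():
        m = re.match(r"(\w+):\s+(\d+)\s*kB", line)
        if m:
            fields[m.group(1)] = int(m.group(2))
    return fields

def mem_available_kb(path: str = "/proc/meminfo") -> int:
    """Current MemAvailable in kB (0 if the field is missing)."""
    with open(path) as f:
        return parse_meminfo(f.read()).get("MemAvailable", 0)
```

Logging `mem_available_kb()` every few seconds from a shell loop while the rsync runs would show whether available memory collapses right before the stall.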
72 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4692
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+magazine: App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
}
+image: null
+domain: Proxies\__CG__\App\Entity\Domain {#1915 …}
+slug: "Question-ZFS-and-rsync"
+title: "Question - ZFS and rsync"
+url: null
+body: """
Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.\n
\n
I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I previously have been using an external USB 12TB harddrive attached to a different machine.\n
\n
I’ve been attempting to use rsync to get the 12TB drive copied over to the new pool and things go great for the first 30-45 minutes. At that point, the current copy speed diminishes and 4 current files in progress sit at 100% done. Eventually, I’ve had to reboot the machine, because the zpool doesn’t appear accessible any longer. After reboot, the pool appears fine, no faults, and I can resume rsync for a while.\n
\n
EDIT: Of note, the rsync process seems to stall and I can’t get it to respect SIGINT or Ctrl+C. I can SSH in separately and running `zpool status` hangs with no output.\n
\n
While the workaround seems to be partially successful, the point of using rsync is to make it fairly hands-free and it’s been a week long process to copy the 3TB that I have now. I don’t think my zpool should be disappearing like that! Makes me nervous about the long-term viability. I don’t think I’m ready to drop down on Unraid.\n
\n
rsync is being initiated from the NAS to copy from the old server, am I better off “pushing” than “pulling”? I can’t imagine it’d make much difference.\n
\n
Could my drives be bad? How could I tell? They’re attached to a 10 port SATA card, could that be defective? How would I tell?\n
\n
Thanks for any help! I’ve dabbled in linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
"""
+type: "article"
+lang: "en"
+isOc: false
+hasEmbed: false
+commentCount: 17
+favouriteCount: 36
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1711558563 {#2410
date: 2024-03-27 17:56:03.0 +01:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1884 …}
+votes: Doctrine\ORM\PersistentCollection {#1973 …}
+reports: Doctrine\ORM\PersistentCollection {#1959 …}
+favourites: Doctrine\ORM\PersistentCollection {#1927 …}
+notifications: Doctrine\ORM\PersistentCollection {#2442 …}
+badges: Doctrine\ORM\PersistentCollection {#2440 …}
+children: []
-id: 25403
-titleTs: "'question':1 'rsync':4 'zfs':2"
-bodyTs: "'-45':79 '10':278 '100':97 '12tb':46,63 '18tb':31 '30':78 '3tb':198 '4':90 '4x':30 'access':113 'al':320 'appear':112,120 'attach':48,275 'attempt':56 'bad':268 'better':250 'built':22 'c':152 'card':281 'copi':65,86,196,243 'could':264,270,282 'ctrl':151 'current':85,91 'd':260 'dabbl':296 'defect':285 'differ':51,263 'diminish':88 'disappear':211 'dmesg':318 'doesn':110 'done':98 'drive':32,64,266 'drop':231 'edit':132 'et':319 'eventu':99 'extern':44 'fair':183 'far':306 'fault':123 'fellow':2 'file':92 'fine':121 'first':77 'free':186 'get':61,145 'go':73 'great':74 'hand':185 'hands-fre':184 'hang':162 'harddriv':47 'help':7,293 'hey':1 'hope':18 'imagin':258 'initi':238 'intricaci':316 'isn':12 'know':314 'like':212 'linux':298 'long':193,220,301 'long-term':219 'longer':115 'm':17,228,305 'machin':52,106 'make':181,214,261 'minut':80 'much':262 'nas':25,241 'need':5 'nervous':216 'network':28 'new':24,69 'note':134 'old':246 'output':165 'partial':172 'point':83,175 'pool':37,70,119 'port':279 'previous':39 'process':137,194 'profici':308 'progress':94 'pull':254 'push':252 'raidz1':36 're':274 'readi':229 'realli':313 'reboot':104,117 'recent':21 'respect':148 'resum':127 'rsync':59,128,136,178,235 'run':159 'sata':280 'search':11 'seem':138,169 'selfhost':3 'separ':157 'server':247 'sigint':149 'sit':95 'speed':87 'ssh':155 'stall':140 'status':161 'success':173 'tell':272,289 'term':221 'thank':290 'thing':72 'think':9,206,226 'time':302 'unraid':234 'usb':45 'use':42,58,177 've':54,101,295 'viabil':222 'week':192 'workaround':168 'would':287 'yield':14 'zfs':35 'zpool':109,160,208"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1703862554
+visibility: "visible "
+apId: "https://lemmy.world/post/10061829"
+editedAt: DateTimeImmutable @1708796667 {#1850
date: 2024-02-24 18:44:27.0 +01:00
}
+createdAt: DateTimeImmutable @1703776154 {#2420
date: 2023-12-28 16:09:14.0 +01:00
}
}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4654
+user: Proxies\__CG__\App\Entity\User {#4655 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4571
+user: Proxies\__CG__\App\Entity\User {#1970 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: App\Entity\EntryComment {#4456
+user: App\Entity\User {#4469 …}
+entry: App\Entity\Entry {#2412}
+magazine: App\Entity\Magazine {#266}
+image: null
+parent: null
+root: null
+body: """
I don’t have practical experience with ZFS, but my understanding is that it uses RAM a lot… if that’s new, it might be worth checking the RAM by booting up memtest (for example) and just ruling that out.\n
\n
Maybe also worth watching the system with `nmon` or `htop` (running in another `tmux` / `screen` pane) at the beginning of the next session, then when you think it’s jammed up, see what looks different…
"""
+lang: "en"
+isAdult: false
+favouriteCount: 1
+score: 0
+lastActive: DateTime @1711282496 {#4451
date: 2024-03-24 13:14:56.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
]
+children: Doctrine\ORM\PersistentCollection {#4457 …}
+nested: Doctrine\ORM\PersistentCollection {#4459 …}
+votes: Doctrine\ORM\PersistentCollection {#4461 …}
+reports: Doctrine\ORM\PersistentCollection {#4463 …}
+favourites: Doctrine\ORM\PersistentCollection {#4465 …}
+notifications: Doctrine\ORM\PersistentCollection {#4467 …}
-id: 260307
-bodyTs: "'also':42 'anoth':53 'begin':59 'boot':31 'check':27 'differ':75 'exampl':35 'experi':6 'htop':50 'jam':70 'look':74 'lot':18 'mayb':41 'memtest':33 'might':24 'new':22 'next':62 'nmon':48 'pane':56 'practic':5 'ram':16,29 'rule':38 'run':51 'screen':55 'see':72 'session':63 'system':46 'think':67 'tmux':54 'understand':11 'use':15 'watch':44 'worth':26,43 'zfs':8"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://feddit.uk/comment/5443383"
+editedAt: null
+createdAt: DateTimeImmutable @1703794455 {#4452
date: 2023-12-28 21:14:15.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Awesome, thanks for giving some clues. It’s a new build, but I didn’t focus hugely on RAM, I think it’s only 32GB. I’ll try this out.\n
\n
Edit: I did some reading about L2ARC, so pending some of these tests, I’m planning to get up to 64gb ram and then extend with an l2arc SSD, assuming no other hardware errors.
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1703794922 {#4568
date: 2023-12-28 21:22:02.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4572 …}
+nested: Doctrine\ORM\PersistentCollection {#4574 …}
+votes: Doctrine\ORM\PersistentCollection {#4576 …}
+reports: Doctrine\ORM\PersistentCollection {#4578 …}
+favourites: Doctrine\ORM\PersistentCollection {#4580 …}
+notifications: Doctrine\ORM\PersistentCollection {#4582 …}
-id: 260330
-bodyTs: "'32gb':25 '64gb':51 'assum':60 'awesom':1 'build':11 'clue':6 'didn':14 'edit':31 'error':64 'extend':55 'focus':16 'get':48 'give':4 'hardwar':63 'huge':17 'l2arc':37,58 'll':27 'm':45 'new':10 'pend':39 'plan':46 'ram':19,52 'read':35 'ssd':59 'test':43 'thank':2 'think':21 'tri':28"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6281743"
+editedAt: DateTimeImmutable @1708816354 {#4569
date: 2024-02-25 00:12:34.0 +01:00
}
+createdAt: DateTimeImmutable @1703794922 {#4570
date: 2023-12-28 21:22:02.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: """
Based on this [thread](https://serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys) it’s the deduplication that requires a lot of RAM.\n
\n
See also: [wiki.freebsd.org/ZFSTuningGuide](https://wiki.freebsd.org/ZFSTuningGuide)\n
\n
Edit: from my understanding, the pool shouldn’t become inaccessible though, only get slow. So there might be another issue.\n
\n
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: [github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)
"""
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1703800878 {#4651
date: 2023-12-28 23:01:18.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@isles@lemmy.world"
"@Cyber@feddit.uk"
]
+children: Doctrine\ORM\PersistentCollection {#4656 …}
+nested: Doctrine\ORM\PersistentCollection {#4658 …}
+votes: Doctrine\ORM\PersistentCollection {#4660 …}
+reports: Doctrine\ORM\PersistentCollection {#4662 …}
+favourites: Doctrine\ORM\PersistentCollection {#4664 …}
+notifications: Doctrine\ORM\PersistentCollection {#4666 …}
-id: 260559
-bodyTs: "'/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':62 '/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':7 '/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':22 'also':19 'anoth':42 'base':1 'becom':31 'check':50 'consumpt':59 'dedupl':11 'edit':23 'edit2':44 'get':36 'github.com':61 'github.com/openzfs/zfs/issues/10251](https://github.com/openzfs/zfs/issues/10251)':60 'guid':48 'inaccess':32 'issu':43 'limit':55 'lot':15 'memori':58 'might':40 'pool':28 'ram':17 'requir':13 'see':18 'serverfault.com':6 'serverfault.com/questions/569354/freenas-do-i-need-1gb-per-tb-of-usable-storage-or-1gb-of-memory-per-tb-of-phys)':5 'shouldn':29 'slow':37 'system':53 'tho':33 'thread':4 'understand':26 'whether':51 'wiki.freebsd.org':21 'wiki.freebsd.org/zfstuningguide](https://wiki.freebsd.org/zfstuningguide)':20 'zfs':57"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://sh.itjust.works/comment/6902309"
+editedAt: DateTimeImmutable @1708835372 {#4652
date: 2024-02-25 05:29:32.0 +01:00
}
+createdAt: DateTimeImmutable @1703800878 {#4653
date: 2023-12-28 23:01:18.0 +01:00
}
}
+root: App\Entity\EntryComment {#4456}
+body: "I believe there’s another issue. ZFS has been using nearly all RAM (which is fine, I only need RAM for system and ZFS anyway, there’s nothing else running on this box), but I was pretty convinced while I was looking that I don’t have dedup turned on. Thanks for your suggestions and links!"
+lang: "en"
+isAdult: false
+favouriteCount: 2
+score: 0
+lastActive: DateTime @1704301495 {#4690
date: 2024-01-03 18:04:55.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@isles@lemmy.world"
"@Cyber@feddit.uk"
"@sonstwas@sh.itjust.works"
]
+children: Doctrine\ORM\PersistentCollection {#4693 …}
+nested: Doctrine\ORM\PersistentCollection {#4695 …}
+votes: Doctrine\ORM\PersistentCollection {#4697 …}
+reports: Doctrine\ORM\PersistentCollection {#4699 …}
+favourites: Doctrine\ORM\PersistentCollection {#4701 …}
+notifications: Doctrine\ORM\PersistentCollection {#4703 …}
-id: 276944
-bodyTs: "'anoth':5 'anyway':25 'believ':2 'box':33 'convinc':38 'dedup':48 'els':29 'fine':16 'issu':6 'link':56 'look':42 'near':11 'need':19 'noth':28 'pretti':37 'ram':13,20 'run':30 'suggest':54 'system':22 'thank':51 'turn':49 'use':10 'zfs':7,24"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemmy.world/comment/6383677"
+editedAt: null
+createdAt: DateTimeImmutable @1704301495 {#4691
date: 2024-01-03 18:04:55.0 +01:00
}
} |
|
|
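The github.com/openzfs/zfs/issues/10251 link above is about checking whether ZFS' ARC is consuming the system's memory. On OpenZFS-on-Linux the counters live in `/proc/spl/kstat/zfs/arcstats`; a minimal sketch of reading them, assuming the standard three-column kstat layout (`name type data`):

```python
# Sketch: read ZFS ARC size vs. its target from the Linux kstat
# file /proc/spl/kstat/zfs/arcstats (OpenZFS on Linux only). The
# three-column "name type data" layout is the kstat format.
def parse_arcstats(text: str) -> dict:
    """Parse kstat-style arcstats text into {counter: int}."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    return stats

def arc_summary(stats: dict) -> str:
    """One-line summary of ARC usage against its configured max."""
    size = stats.get("size", 0)
    c_max = stats.get("c_max", 0)
    pct = 100 * size / c_max if c_max else 0
    return f"ARC {size >> 20} MiB of {c_max >> 20} MiB max ({pct:.0f}%)"
```

Usage on a ZFS box would be `arc_summary(parse_arcstats(open("/proc/spl/kstat/zfs/arcstats").read()))`; an ARC pinned at `c_max` while the pool stalls would point at the memory-pressure theory in the thread.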
73 |
DENIED
|
edit
|
App\Entity\Magazine {#266
+icon: Proxies\__CG__\App\Entity\Image {#247 …}
+name: "selfhosted@lemmy.world"
+title: "selfhosted"
+description: """
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.\n
\n
Rules:\n
\n
- Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.\n
- No spam posting.\n
- Don’t duplicate the full text of your blog or github here. Just post the link for folks to click.\n
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).\n
- No trolling.\n
\n
Resources:\n
\n
- [awesome-selfhosted software](https://github.com/awesome-selfhosted/awesome-selfhosted)\n
- [awesome-sysadmin](https://github.com/awesome-foss/awesome-sysadmin) resources\n
- [Self-Hosted Podcast from Jupiter Broadcasting](https://selfhosted.show)\n
\n
> Any issues on the community? Report it using the report flag.\n
\n
> Questions? DM the mods!
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 222
+entryCommentCount: 3916
+postCount: 0
+postCommentCount: 0
+isAdult: false
+customCss: null
+lastActive: DateTime @1729582735 {#276
date: 2024-10-22 09:38:55.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#238 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#234 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#223 …}
+entries: Doctrine\ORM\PersistentCollection {#181 …}
+posts: Doctrine\ORM\PersistentCollection {#139 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#201 …}
+bans: Doctrine\ORM\PersistentCollection {#118 …}
+reports: Doctrine\ORM\PersistentCollection {#104 …}
+badges: Doctrine\ORM\PersistentCollection {#82 …}
+logs: Doctrine\ORM\PersistentCollection {#72 …}
+awards: Doctrine\ORM\PersistentCollection {#61 …}
+categories: Doctrine\ORM\PersistentCollection {#1820 …}
-id: 120
+apId: "selfhosted@lemmy.world"
+apProfileId: "https://lemmy.world/c/selfhosted"
+apPublicUrl: "https://lemmy.world/c/selfhosted"
+apFollowersUrl: "https://lemmy.world/c/selfhosted/followers"
+apInboxUrl: "https://lemmy.world/inbox"
+apDomain: "lemmy.world"
+apPreferredUsername: "selfhosted"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1703473826 {#270
date: 2023-12-25 04:10:26.0 +01:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1703473826 {#272
date: 2023-12-25 04:10:26.0 +01:00
}
} |
|
|
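On the push-vs-pull question in the original post: the data flow is the same either way, so the bigger win for a transfer that keeps stalling is flags that make it resumable. A minimal sketch; `build_rsync_cmd` and the paths are hypothetical, but `-a`, `--partial`, and `--progress` are standard rsync options:

```python
# Sketch: build a resumable rsync command line for the
# push-vs-pull question. build_rsync_cmd is a hypothetical helper;
# the flags (-a archive mode, --partial keep interrupted files,
# --progress per-file progress) are standard rsync options.
from typing import List

def build_rsync_cmd(local_path: str, remote_spec: str,
                    push: bool = True) -> List[str]:
    """argv for a transfer that can be resumed after a stall.

    push=True: run on the machine holding the data (local source,
    remote destination). push=False: run on the receiving machine
    (remote source, local destination). Only where the rsync
    process runs differs, not what gets copied.
    """
    cmd = ["rsync", "-a", "--partial", "--progress"]
    if push:
        cmd += [local_path, remote_spec]
    else:
        cmd += [remote_spec, local_path]
    return cmd
```

Running it via `subprocess.run(build_rsync_cmd("/mnt/old12tb/", "nas:/tank/media/"))` (hypothetical paths) would push from the old server; re-running the same command after a reboot picks up where `--partial` left off.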