Query Metrics
Metric | Value |
---|---|
Database queries | 1 |
Different statements | 1 |
Query time | 75.59 ms |
Invalid entities | 0 |
Cache hits | 0 |
Cache misses | 0 |
Cache puts | 0 |
Grouped Statements
Time | Count | Info |
---|---|---|
75.59 ms (100.00%) | 1 | INSERT INTO messenger_messages (body, headers, queue_name, created_at, available_at) VALUES(?, ?, ?, ?, ?) |

Parameters:
[ "{"payload":"{\"@context\":[\"https:\/\/join-lemmy.org\/context.json\",\"https:\/\/www.w3.org\/ns\/activitystreams\"],\"actor\":\"https:\/\/lemmy.ml\/c\/datahoarder\",\"to\":[\"https:\/\/www.w3.org\/ns\/activitystreams#Public\"],\"object\":{\"id\":\"https:\/\/sh.itjust.works\/post\/35471133\",\"actor\":\"https:\/\/sh.itjust.works\/u\/beastlykings\",\"type\":\"Page\",\"attributedTo\":\"https:\/\/sh.itjust.works\/u\/beastlykings\",\"to\":[\"https:\/\/lemmy.ml\/c\/datahoarder\",\"https:\/\/www.w3.org\/ns\/activitystreams#Public\"],\"name\":\"Recommendations for an inexpensive DIY backup?\",\"cc\":[],\"content\":\"<p>Hi there, I\u2019ve been meaning to go get more serious about my data. I have minimal backups, and some stuff is not backed up at all. I\u2019m begging for disaster.<\/p>\\n<p>Here\u2019s what I\u2019ve got:\\n2 8tb drives almost full in universal external enclosures\\nA small formfactor PC as a server, with one 8tb drive connected.\\nAn unused raspberry pi.\\nNo knowledge of how to properly use zfs.<\/p>\\n<p>Here\u2019s what I want:\\nI\u2019ve decided I don\u2019t need raid. I don\u2019t want the extra cost of drives or electricity, and I don\u2019t need uptime. I just need backups.\\nI want to use what drives I have, and an additional 16tb drive I\u2019ll buy.<\/p>\\n<p>My thought was that I would replace the 8tb drive with a 16tb one, format it with zfs (primarily to avoid bit rot. I\u2019ll need to learn how to check for this), then back it up across the two 8tb drives as a cold backup. Either as two separate drives somehow? Btrfs volume extension? 
Or a jbod connected to the raspberry pi, that I leave unplugged except for when it\u2019s time to sync the new data?<\/p>\\n<p>Or do you have a similarly cheap solution that\u2019s less janky?<\/p>\\n<p>I just want to back up my data, with an amount of rot protection, cheaply.<\/p>\\n<p>I understand that it might make sense to invest in something a bit more robust right now, and fill it with drives as needed.<\/p>\\n<p>But the thing I keep coming to is the cold backup. How can you keep cold backups over several hard drives, without an entire second server to do the work?<\/p>\\n<p>Thanks for listening to my rambling.<\/p>\\n\",\"mediaType\":\"text\/html\",\"source\":{\"content\":\"Hi there, I've been meaning to go get more serious about my data. I have minimal backups, and some stuff is not backed up at all. I'm begging for disaster.\\n\\n\\nHere's what I've got: \\n2 8tb drives almost full in universal external enclosures\\nA small formfactor PC as a server, with one 8tb drive connected.\\nAn unused raspberry pi.\\nNo knowledge of how to properly use zfs.\\n\\n\\nHere's what I want: \\nI've decided I don't need raid. I don't want the extra cost of drives or electricity, and I don't need uptime. I just need backups.\\nI want to use what drives I have, and an additional 16tb drive I'll buy.\\n\\n\\nMy thought was that I would replace the 8tb drive with a 16tb one, format it with zfs (primarily to avoid bit rot. I'll need to learn how to check for this), then back it up across the two 8tb drives as a cold backup. Either as two separate drives somehow? Btrfs volume extension? Or a jbod connected to the raspberry pi, that I leave unplugged except for when it's time to sync the new data?\\n\\n\\nOr do you have a similarly cheap solution that's less janky?\\n\\n\\nI just want to back up my data, with an amount of rot protection, cheaply. \\n\\nI understand that it might make sense to invest in something a bit more robust right now, and fill it with drives as needed. 
\\n\\nBut the thing I keep coming to is the cold backup. How can you keep cold backups over several hard drives, without an entire second server to do the work? \\n\\n\\nThanks for listening to my rambling.\\n\\n\",\"mediaType\":\"text\/markdown\"},\"attachment\":[],\"sensitive\":false,\"published\":\"2025-04-03T02:26:07.275277Z\",\"audience\":\"https:\/\/lemmy.ml\/c\/datahoarder\",\"tag\":[{\"href\":\"https:\/\/sh.itjust.works\/post\/35471133\",\"name\":\"#datahoarder\",\"type\":\"Hashtag\"}]},\"cc\":[\"https:\/\/lemmy.ml\/c\/datahoarder\/followers\"],\"type\":\"Announce\",\"id\":\"https:\/\/lemmy.ml\/activities\/announce\/page\/a750a09e-a259-46c2-a8c8-00ddca145f59\"}","request":{"host":"kbin.spritesserver.nl","method":"POST","uri":"\/f\/inbox","client_ip":"54.36.178.108"},"headers":{"content-type":["application\/activity+json"],"host":["kbin.spritesserver.nl"],"date":["Thu, 03 Apr 2025 02:26:17 GMT"],"digest":["SHA-256=qhW6RysicYmeE6Yns\/4B7BXE8+Z+P+QEiOCXj9E2gzo="],"signature":["keyId=\"https:\/\/lemmy.ml\/c\/datahoarder#main-key\",algorithm=\"hs2019\",headers=\"(request-target) content-type date digest host\",signature=\"G8FZqyJ9pG2CBYHbS\/ORV7Ywuwm02zxrPfEdkkW9ByJXFLuxuSY\/RbNizBUbFXESXBQjOtqTP8l48FojS3PrzDm9wNeKg4y9NLP0cNfiBNY4oE4CD2htP9jlqUaAXML9er++3Xf2fllh5muz5aEaPFFhjFjG9+7u8FNoN7iwtcA7pK8JJzpp7HbcuGveEmBah0kLhKo4xWF9H5\/8kqZ2cvBJy38IC8eYNtJOAdjtLu1trUsrUOAfYryvyn81cYHg7uVhwtAJiuPhMe90A8nPlIaqiSE4bwvoHReOpsZGuEv9D+lAcdm+uQEZKyxhxxI3sY6Jm4HYWMUBx\/NNW9l9\/A==\""],"accept":["*\/*"],"user-agent":["Lemmy\/0.19.10; +https:\/\/lemmy.ml"],"accept-encoding":["gzip"],"content-length":["4000"],"x-php-ob-level":["1"]}}" "{"type":"App\\Message\\ActivityPub\\Inbox\\ActivityMessage","X-Message-Stamp-Symfony\\Component\\Messenger\\Stamp\\BusNameStamp":"[{\"busName\":\"messenger.bus.default\"}]","Content-Type":"application\/json"}" "default" "2025-04-03 02:26:18" "2025-04-03 02:26:18" ] |
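The row above is what Symfony Messenger's Doctrine transport stores when a message is dispatched: the serialized payload in `body`, stamp metadata in `headers`, and the queue name (`default` here). As a rough illustration of the schema implied by that INSERT, the sketch below replays it against an in-memory SQLite database; the column types and the `delivered_at` column are assumptions based on the transport's usual table layout, not taken from this profile:

```python
import sqlite3

# Minimal stand-in for the messenger_messages table the INSERT above targets.
# Column types and delivered_at are assumed; they are not shown in the profile.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE messenger_messages (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        body TEXT NOT NULL,
        headers TEXT NOT NULL,
        queue_name TEXT NOT NULL,
        created_at TEXT NOT NULL,
        available_at TEXT NOT NULL,
        delivered_at TEXT DEFAULT NULL
    )
""")

# The same prepared statement the profiler recorded, with placeholder values
# standing in for the real (much larger) JSON parameters.
conn.execute(
    "INSERT INTO messenger_messages (body, headers, queue_name, created_at, available_at)"
    " VALUES (?, ?, ?, ?, ?)",
    ('{"payload": "..."}', '{"Content-Type": "application/json"}',
     "default", "2025-04-03 02:26:18", "2025-04-03 02:26:18"),
)

# A worker would later claim rows whose delivered_at is still NULL.
pending = conn.execute(
    "SELECT COUNT(*) FROM messenger_messages"
    " WHERE queue_name = ? AND delivered_at IS NULL",
    ("default",),
).fetchone()[0]
print(pending)  # one pending message
```

The single 75.59 ms query in this profile corresponds to exactly one such insert: the request enqueued one inbound ActivityPub message and touched no other tables.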
Database Connections
Name | Service |
---|---|
default | doctrine.dbal.default_connection |
Entity Managers
Name | Service |
---|---|
default | doctrine.orm.default_entity_manager |
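The single `default` connection and entity manager listed above match the DoctrineBundle defaults. A minimal `config/packages/doctrine.yaml` along these lines would register exactly those two services; this is a sketch of the conventional layout, not this application's actual configuration:

```yaml
# Sketch only: the real configuration is not part of this profile.
doctrine:
    dbal:
        connections:
            default:            # -> doctrine.dbal.default_connection
                url: '%env(resolve:DATABASE_URL)%'
    orm:
        default_entity_manager: default
        entity_managers:
            default:            # -> doctrine.orm.default_entity_manager
                connection: default
```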
Second Level Cache
Metric | Value |
---|---|
Hits | 0 |
Misses | 0 |
Puts | 0 |
Entities Mapping
No loaded entities.