1 | DENIED | ROLE_USER | null
2 | DENIED | moderate |
App\Entity\Entry {#1796
+user: Proxies\__CG__\App\Entity\User {#1395 …}
+magazine: App\Entity\Magazine {#264
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#274
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#252 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#248 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#237 …}
+entries: Doctrine\ORM\PersistentCollection {#195 …}
+posts: Doctrine\ORM\PersistentCollection {#153 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#215 …}
+bans: Doctrine\ORM\PersistentCollection {#132 …}
+reports: Doctrine\ORM\PersistentCollection {#118 …}
+badges: Doctrine\ORM\PersistentCollection {#96 …}
+logs: Doctrine\ORM\PersistentCollection {#86 …}
+awards: Doctrine\ORM\PersistentCollection {#75 …}
+categories: Doctrine\ORM\PersistentCollection {#62 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#268
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#270
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: Proxies\__CG__\App\Entity\Image {#1876 …}
+domain: Proxies\__CG__\App\Entity\Domain {#1976 …}
+slug: "scraped-media-links-from-instagram-and-threads"
+title: "scraped media links from instagram and threads"
+url: "https://gist.github.com/Ghodawalaaman/f331d95550f64afac67a6b2a68903bf7"
+body: """
I have scraped a lot of links from instagram and threads using selenium python. It was a good learning experience. I will be running that script for few days more and will see how many more media links I can scrape from instagram and threads.\n
\n
However, the problem is that the media isn’t tagged so we don’t know what type of media it is. I wonder if there is an AI or something that can categorize this random media links to an organized list.\n
\n
if you want to download all the media from the links you can run the following command:\n
\n
```\n
\n
<span style="font-style:italic;color:#969896;"># This command will download file with all the links\n
</span><span style="color:#323232;">wget -O links.txt https://gist.githubusercontent.com/Ghodawalaaman/f331d95550f64afac67a6b2a68903bf7/raw/7cc4cc57cdf5ab8aef6471c9407585315ca9d628/gistfile1.txt\n
</span><span style="font-style:italic;color:#969896;"># This command will actually download the media from the links file we got from the above command \n
</span><span style="color:#323232;">wget -i links1.txt\n
</span>\n
```\n
\n
I was thinking about storing all of these. there is two ways of storing these. the first one is to just store the links.txt file and download the content when needed or we can download the content from the links save it to a hard drive. the second method will consume more space, so the first method is good imo.\n
\n
I hope it was something you like :)
"""
+type: "link"
+lang: "en"
+isOc: false
+hasEmbed: true
+commentCount: 1
+favouriteCount: 7
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1728969781 {#1817
date: 2024-10-15 07:23:01.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1939 …}
+votes: Doctrine\ORM\PersistentCollection {#1932 …}
+reports: Doctrine\ORM\PersistentCollection {#1978 …}
+favourites: Doctrine\ORM\PersistentCollection {#1368 …}
+notifications: Doctrine\ORM\PersistentCollection {#2426 …}
+badges: Doctrine\ORM\PersistentCollection {#2439 …}
+children: []
-id: 33175
-titleTs: "'instagram':5 'link':3 'media':2 'scrape':1 'thread':7"
-bodyTs: "'/ghodawalaaman/f331d95550f64afac67a6b2a68903bf7/raw/7cc4cc57cdf5ab8aef6471c9407585315ca9d628/gistfile1.txt':118 'actual':122 'ai':73 'categor':78 'command':103,105,120,135 'consum':189 'content':167,175 'day':29 'download':91,107,123,165,173 'drive':184 'experi':20 'file':108,129,163 'first':155,194 'follow':102 'gist.githubusercontent.com':117 'gist.githubusercontent.com/ghodawalaaman/f331d95550f64afac67a6b2a68903bf7/raw/7cc4cc57cdf5ab8aef6471c9407585315ca9d628/gistfile1.txt':116 'good':18,197 'got':131 'hard':183 'hope':200 'howev':46 'imo':198 'instagram':9,43 'isn':53 'know':60 'learn':19 'like':205 'link':7,38,82,97,112,128,178 'links.txt':115,162 'links1.txt':138 'list':86 'lot':5 'mani':35 'media':37,52,64,81,94,125 'method':187,195 'need':169 'o':114 'one':156 'organ':85 'problem':48 'python':14 'random':80 'run':24,100 'save':179 'scrape':3,41 'script':26 'second':186 'see':33 'selenium':13 'someth':75,203 'space':191 'store':143,152,160 'tag':55 'think':141 'thread':11,45 'two':149 'type':62 'use':12 'want':89 'way':150 'wget':113,136 'wonder':68"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1706271759
+visibility: "visible "
+apId: "https://lemmy.ca/post/14074247"
+editedAt: null
+createdAt: DateTimeImmutable @1706251759 {#1854
date: 2024-01-26 07:49:19.0 +01:00
}
} |
3 | DENIED | edit | App\Entity\Entry {#1796 …} (same dump as entry #2)
4 | DENIED | moderate | App\Entity\Entry {#1796 …} (same dump as entry #2)
5 | DENIED | ROLE_USER | null
6 | DENIED | moderate |
App\Entity\EntryComment {#4038
+user: App\Entity\User {#3987 …}
+entry: App\Entity\Entry {#1796 …} (same dump as entry #2)
+magazine: App\Entity\Magazine {#264}
+image: null
+parent: null
+root: null
+body: """
What is your goal with this?\n
\n
I know, stupid question to a datahoarder. My point is that archiving all of instagram or threads would be impossible even for ArchiveTeam, much less a single person. Are these random posts, or ones you care about?
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1706368573 {#4047
date: 2024-01-27 16:16:13.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@kionite231@lemmy.ca"
]
+children: Doctrine\ORM\PersistentCollection {#4036 …}
+nested: Doctrine\ORM\PersistentCollection {#4034 …}
+votes: Doctrine\ORM\PersistentCollection {#4032 …}
+reports: Doctrine\ORM\PersistentCollection {#4030 …}
+favourites: Doctrine\ORM\PersistentCollection {#3998 …}
+notifications: Doctrine\ORM\PersistentCollection {#4002 …}
-id: 344809
-bodyTs: "'archiv':18 'archiveteam':29 'care':42 'datahoard':13 'even':27 'goal':4 'imposs':26 'instagram':21 'know':8 'less':31 'much':30 'one':40 'person':34 'point':15 'post':38 'question':10 'random':37 'singl':33 'stupid':9 'thread':23 'would':24"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/8716912"
+editedAt: null
+createdAt: DateTimeImmutable @1706368573 {#3979
date: 2024-01-27 16:16:13.0 +01:00
}
} |
|
Show voter details
|
7 |
DENIED
|
edit
|
App\Entity\EntryComment {#4038
+user: App\Entity\User {#3987 …}
+entry: App\Entity\Entry {#1796
+user: Proxies\__CG__\App\Entity\User {#1395 …}
+magazine: App\Entity\Magazine {#264
+icon: null
+name: "datahoarder@lemmy.ml"
+title: "datahoarder"
+description: """
**Who are we?**\n
\n
We are digital librarians. Among us are represented the various reasons to keep data – legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they’re sure it’s done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.\n
\n
We are one. We are legion. And we’re trying really hard not to forget.\n
\n
– 5-4-3-2-1-bang from [this thread](https://web.archive.org/web/20221111153119/https://old.reddit.com/r/DataHoarder/comments/41tqt4/hi_guys_can_i_kindly_ask_for_an_eli5_of_this/cz53pi0/)
"""
+rules: null
+subscriptionsCount: 1
+entryCount: 58
+entryCommentCount: 311
+postCount: 1
+postCommentCount: 1
+isAdult: false
+customCss: null
+lastActive: DateTime @1729502222 {#274
date: 2024-10-21 11:17:02.0 +02:00
}
+markedForDeletionAt: null
+tags: null
+moderators: Doctrine\ORM\PersistentCollection {#252 …}
+ownershipRequests: Doctrine\ORM\PersistentCollection {#248 …}
+moderatorRequests: Doctrine\ORM\PersistentCollection {#237 …}
+entries: Doctrine\ORM\PersistentCollection {#195 …}
+posts: Doctrine\ORM\PersistentCollection {#153 …}
+subscriptions: Doctrine\ORM\PersistentCollection {#215 …}
+bans: Doctrine\ORM\PersistentCollection {#132 …}
+reports: Doctrine\ORM\PersistentCollection {#118 …}
+badges: Doctrine\ORM\PersistentCollection {#96 …}
+logs: Doctrine\ORM\PersistentCollection {#86 …}
+awards: Doctrine\ORM\PersistentCollection {#75 …}
+categories: Doctrine\ORM\PersistentCollection {#62 …}
-id: 32
+apId: "datahoarder@lemmy.ml"
+apProfileId: "https://lemmy.ml/c/datahoarder"
+apPublicUrl: "https://lemmy.ml/c/datahoarder"
+apFollowersUrl: "https://lemmy.ml/c/datahoarder/followers"
+apInboxUrl: "https://lemmy.ml/inbox"
+apDomain: "lemmy.ml"
+apPreferredUsername: "datahoarder"
+apDiscoverable: true
+apManuallyApprovesFollowers: null
+privateKey: null
+publicKey: null
+apFetchedAt: DateTime @1729303437 {#268
date: 2024-10-19 04:03:57.0 +02:00
}
+apDeletedAt: null
+apTimeoutAt: null
+visibility: "visible "
+createdAt: DateTimeImmutable @1695727192 {#270
date: 2023-09-26 13:19:52.0 +02:00
}
}
+image: Proxies\__CG__\App\Entity\Image {#1876 …}
+domain: Proxies\__CG__\App\Entity\Domain {#1976 …}
+slug: "scraped-media-links-from-instagram-and-threads"
+title: "scraped media links from instagram and threads"
+url: "https://gist.github.com/Ghodawalaaman/f331d95550f64afac67a6b2a68903bf7"
+body: """
I have scraped a lot of links from instagram and threads using selenium python. It was a good learning experience. I will be running that script for few days more and will see how many more media links I can scrape from instagram and threads.\n
\n
However, the problem is that the media isn’t tagged so we don’t know what type of media it is. I wonder if there is an AI or something that can categorize this random media links to an organized list.\n
\n
if you want to download all the media from the links you can run the following command:\n
\n
```\n
\n
<span style="font-style:italic;color:#969896;"># This command will download file with all the links\n
</span><span style="color:#323232;">wget -O links.txt https://gist.githubusercontent.com/Ghodawalaaman/f331d95550f64afac67a6b2a68903bf7/raw/7cc4cc57cdf5ab8aef6471c9407585315ca9d628/gistfile1.txt\n
</span><span style="font-style:italic;color:#969896;"># This command will actually download the media from the links file we got from the above command \n
</span><span style="color:#323232;">wget -i links1.txt\n
</span>\n
```\n
\n
I was thinking about storing all of these. there is two ways of storing these. the first one is to just store the links.txt file and download the content when needed or we can download the content from the links save it to a hard drive. the second method will consume more space, so the first method is good imo.\n
\n
I hope it was something you like :)
"""
+type: "link"
+lang: "en"
+isOc: false
+hasEmbed: true
+commentCount: 1
+favouriteCount: 7
+score: 0
+isAdult: false
+sticky: false
+lastActive: DateTime @1728969781 {#1817
date: 2024-10-15 07:23:01.0 +02:00
}
+ip: null
+adaAmount: 0
+tags: null
+mentions: null
+comments: Doctrine\ORM\PersistentCollection {#1939 …}
+votes: Doctrine\ORM\PersistentCollection {#1932 …}
+reports: Doctrine\ORM\PersistentCollection {#1978 …}
+favourites: Doctrine\ORM\PersistentCollection {#1368 …}
+notifications: Doctrine\ORM\PersistentCollection {#2426 …}
+badges: Doctrine\ORM\PersistentCollection {#2439 …}
+children: []
-id: 33175
-titleTs: "'instagram':5 'link':3 'media':2 'scrape':1 'thread':7"
-bodyTs: "'/ghodawalaaman/f331d95550f64afac67a6b2a68903bf7/raw/7cc4cc57cdf5ab8aef6471c9407585315ca9d628/gistfile1.txt':118 'actual':122 'ai':73 'categor':78 'command':103,105,120,135 'consum':189 'content':167,175 'day':29 'download':91,107,123,165,173 'drive':184 'experi':20 'file':108,129,163 'first':155,194 'follow':102 'gist.githubusercontent.com':117 'gist.githubusercontent.com/ghodawalaaman/f331d95550f64afac67a6b2a68903bf7/raw/7cc4cc57cdf5ab8aef6471c9407585315ca9d628/gistfile1.txt':116 'good':18,197 'got':131 'hard':183 'hope':200 'howev':46 'imo':198 'instagram':9,43 'isn':53 'know':60 'learn':19 'like':205 'link':7,38,82,97,112,128,178 'links.txt':115,162 'links1.txt':138 'list':86 'lot':5 'mani':35 'media':37,52,64,81,94,125 'method':187,195 'need':169 'o':114 'one':156 'organ':85 'problem':48 'python':14 'random':80 'run':24,100 'save':179 'scrape':3,41 'script':26 'second':186 'see':33 'selenium':13 'someth':75,203 'space':191 'store':143,152,160 'tag':55 'think':141 'thread':11,45 'two':149 'type':62 'use':12 'want':89 'way':150 'wget':113,136 'wonder':68"
+cross: false
+upVotes: 0
+downVotes: 0
+ranking: 1706271759
+visibility: "visible "
+apId: "https://lemmy.ca/post/14074247"
+editedAt: null
+createdAt: DateTimeImmutable @1706251759 {#1854
date: 2024-01-26 07:49:19.0 +01:00
}
}
+magazine: App\Entity\Magazine {#264}
+image: null
+parent: null
+root: null
+body: """
What is your goal with this?\n
\n
I know, a stupid question to ask a datahoarder. My point is that archiving all of Instagram or Threads would be impossible even for ArchiveTeam, much less for a single person. Are these random posts, or ones you care about?
"""
+lang: "en"
+isAdult: false
+favouriteCount: 0
+score: 0
+lastActive: DateTime @1706368573 {#4047
date: 2024-01-27 16:16:13.0 +01:00
}
+ip: null
+tags: null
+mentions: [
"@kionite231@lemmy.ca"
]
+children: Doctrine\ORM\PersistentCollection {#4036 …}
+nested: Doctrine\ORM\PersistentCollection {#4034 …}
+votes: Doctrine\ORM\PersistentCollection {#4032 …}
+reports: Doctrine\ORM\PersistentCollection {#4030 …}
+favourites: Doctrine\ORM\PersistentCollection {#3998 …}
+notifications: Doctrine\ORM\PersistentCollection {#4002 …}
-id: 344809
-bodyTs: "'archiv':18 'archiveteam':29 'care':42 'datahoard':13 'even':27 'goal':4 'imposs':26 'instagram':21 'know':8 'less':31 'much':30 'one':40 'person':34 'point':15 'post':38 'question':10 'random':37 'singl':33 'stupid':9 'thread':23 'would':24"
+ranking: 0
+commentCount: 0
+upVotes: 0
+downVotes: 0
+visibility: "visible "
+apId: "https://lemm.ee/comment/8716912"
+editedAt: null
+createdAt: DateTimeImmutable @1706368573 {#3979
date: 2024-01-27 16:16:13.0 +01:00
}
} |
|
|
8 |
DENIED
|
moderate
|
App\Entity\EntryComment {#4038 …} |
|
|
9 |
DENIED
|
edit
|
App\Entity\Magazine {#264} |
|
|