404media.co

Poggervania, to privacy in Verizon Gave Her Data to a Stalker. ‘This Has Completely Changed My Life’
@Poggervania@kbin.social avatar

Bullshit, Verizon isn’t a victim at all - they fucked up, they should own up to their mistake instead of trying to go “me too!” to a situation where a stalker harassed their customer and their family after giving said stalker the customer’s personal information.

billbasher, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Now will there be any sort of accountability? PII is pretty regulated in some places

far_university1990,

Get it to recite pieces of a few books, then let publishers shred them.

Atemu,
@Atemu@lemmy.ml avatar

Accountability? For tech giants? AHAHAHAAHAHAHAHAHAHAHAAHAHAHAA

Chozo,

I'd have to imagine that this PII was made publicly-available in order for GPT to have scraped it.

Touching_Grass,

large amounts of privately identifiable information (PII)

Yeah, the wording is kind of ambiguous. Are they saying it's a private phone number, or the number of a Ted and Sons Plumbing and Heating?

Solumbran,

Publicly available does not mean free to use.

Touching_Grass,

Think it does

RenardDesMers,
@RenardDesMers@lemmy.ml avatar

According to EU law, PII should be accessible, modifiable and deletable by the data subjects. I don't think ChatGPT would allow me to delete information about me found in their training data.

Touching_Grass, (edited )

ban all European IPs from using these applications

But again, is this your information as in random individuals', or is this really some company roster listing CEOs that it grabbed off some third-party website none of us are actually on, being passed off as if it's regular folks' information?

Catoblepas,

“Just ban everyone from places with legal protections” is a hilarious solution to a PII-spitting machine, thanks for the laugh.

Touching_Grass, (edited )

You’re pretentiously laughing at region locking. That’s been around for a while. You can’t untrain these AIs. This PII, which has always been publicly available and seems to be an issue only now, is not something they can pull out and retrain away. So if it’s that big an issue, region lock them. Fuck em. But again, this doesn’t sound like Joe Blow’s information is available. It seems more like websites that scrape company details, which these AIs then scrape.

Catoblepas,

Lol.

Chozo,

It also doesn't mean it inherently isn't free to use, either. The article doesn't say whether or not the PII in question was intended to be private or public.

Davel23,

I could leave my car with the keys in the ignition in the bad part of town. It's still not legal to steal it.

Chozo,

Again, the article doesn't say whether or not the data was intended to be public. People post their contact info online on purpose sometimes, you know. Businesses and shit. Which seems most likely to be what's happened, given that the example has a fax number.

Dran_Arcana,

If someone had some theoretical device that could x-ray, 3d image, and 3d print an exact replica of your car though, that would be legal. That’s a closer analogy.

It’s not illegal to reverse-engineer and reproduce for personal use. It is questionably legal though to sell the reproduction. However, if the car were open-source or otherwise not copyrighted/patented it probably would be legal to sell the reproduction.

Dran_Arcana,

I absolutely would

j4k3,
@j4k3@lemmy.world avatar

Irrelevant! Your car is uploading you!

Turun,

I’m curious how accurate the PII is. I can generate strings of text and numbers and say that it’s a person’s name and phone number. But that doesn’t mean it’s PII. LLMs like to hallucinate a lot.
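To make that accuracy question concrete, here's a toy sketch (all strings and numbers invented for illustration) of how you'd estimate the precision of extracted "PII" against a set of values known to actually appear in the training data:

```python
# Toy precision check: which of the model's emitted "phone numbers" are real?
emitted = ["555-0143", "555-9999", "555-0172", "123-4567"]   # model output (made up)
ground_truth = {"555-0143", "555-0172"}                      # known training-data values (made up)

# Precision = fraction of emitted items that are genuine, not hallucinated
hits = [e for e in emitted if e in ground_truth]
precision = len(hits) / len(emitted)
print(f"precision: {precision:.2f}")  # precision: 0.50
```

Without a ground-truth set like this, a string that merely looks like a name and phone number tells you nothing.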

casmael,

Well now I have to pii again - hopefully that’s not regulated where I live (in my house)

jmankman, to privacyguides in Marketing Company Claims That It Actually Is Listening to Your Phone and Smart Speakers to Target Ads

Advertising continues to prove that it is a net negative in the world every time I see it

Aabbcc,

scribbles notes

Don’t be seen… Got it

Dr_Fetus_Jackson,

“You can always tell a Milford man.”

Texas_Hangover,

And it's never satisfied, and gets progressively worse.

DBT, to upliftingnews in California Lawmakers Unanimously Pass Right to Repair Legislation

I’m really interested in why apple was so much against it before but are for it now. Maybe there’s an obvious reason, maybe not.

But I’m too tired to google this and dive further in.

saltesc,

When I worked at Apple, much of the repairing wasn’t modular. Simply device replacement at a high fixed cost. They’d then cannibalize the surrendered devices for parts or repair them cheap, then make them replacement devices for the next person to come in. It made huge money.

AProfessional,

It’s actually a very soft bill, it has no requirements to make hardware that is actually pro-consumer.

UltraMagnus0001,

The tools will probably cost as much as the device, and the replacement parts will be locked, requiring Apple's expensive tools.

0110010001100010,
@0110010001100010@lemmy.world avatar

Which is likely why they switched to supporting it. It was this or more strict requirements in the future.

circuitfarmer,
@circuitfarmer@lemmy.sdf.org avatar

That’s disheartening but I figured it had to be something like that. Ultimately then the danger will be thinking “great, now right to repair is fixed”, plus Apple gets to claim they were altruistic. Ugh.

donescobar,

We fixed right to repair in 2023 just like we fixed racism by electing Obama in 2008

Hamartiogonic,
@Hamartiogonic@sopuli.xyz avatar

Can’t wait to see how you fix your healthcare system.

bronzle,

Duh, ObamaCare, what more could we ask for?

dditty,

Apple won’t be forced to change their current business practice of soldering everything to the logic board, security chips disabling devices after repairs unless unlocked with their proprietary software, etc., so it won’t affect their monopolizing of the Apple repair market. They’ll just have to offer logic boards for sale with a one-page PDF showing how to replace the board, and maybe they’ll make the security software fix more available (which would still be huge). But 99% of their users likely wouldn’t do it themselves anyway.

Either way, this is still a huge step in the right direction though!

lobut,

Apple was against it because if you have parts, you can build counterfeit iPhones and stuff (I read about that rationale years ago, take it with a grain of salt). Also, the repair market is quite lucrative, forcing customers to buy new devices rather than actually fixing them. They were doing this with iPods way back in the day with irreplaceable batteries, or batteries so pricey "you may as well buy a new one".

No idea why they changed their tune. I could only imagine their revenue streams have leaned more into software now but I’m just an idiot online, what do I know.

zurohki,

If it’s made from all genuine parts from the manufacturer, is it really a counterfeit device?

Raisin8659,
@Raisin8659@monyet.cc avatar

Try asking Bing. It gives multiple possible answers with references. Still have to check the references anyway because sometimes the references don’t support the statements.

possiblylinux127, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Now that’s interesting

iHUNTcriminals, to upliftingnews in Apple Formally Endorses Right to Repair Legislation After Spending Millions Fighting It

Which means it’s bullshit.

Bitrot,
@Bitrot@lemmy.sdf.org avatar

Apple released a self repair storefront, soon afterwards comes legislation that perfectly mirrors the way they operate their program. AFAIK Apple is still free to keep people from using old parts off eBay and such.

Martineski,
@Martineski@lemmy.fmhy.net avatar

Also, you can't stock parts, from what I remember. You have to get the customer first and only then order the parts, while the official repair stores have their stock ready, making third-party stores less attractive because of the slowness.

Septimaeus, (edited ) to privacy in Are Phones and Smart Speakers Listening to You? Cox Media Group Claims They Can | Cord Cutters News

I usually wear the tin foil hat in these debates, but I must concede in this case: the eavesdropping phone theory in particular is difficult to substantiate, from a technical standpoint.

For one, a user can check this themselves today with basic local network traffic monitors or packet sniffing tools. Even heavily compressed audio data will stand out in the log, no matter how it’s encrypted, streamed, batched or what have you.

To get a sense of what I mean, run wireshark and give a wake phrase command to see what that looks like. Now imagine trying to obfuscate that type of transmission for audio longer than 2 seconds, and repeatedly throughout a day.

Even assuming local audio inference and processing on a completely compromised device (rooted/jailbroken, disabled sandboxing/SIP, unrestricted platform access, the works) most phones will just struggle to do that recording and processing indeterminately without a noticeable impact on energy and data use.

I’m sure advertising companies would love to collect that much raw candid data. It would seem quite a challenge to do so quietly, however, and given the apparent lack of evidence, is thus unlikely to have been implemented at any kind of scale.
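For a rough sense of scale on the data-use point, here's a back-of-envelope calculation (the bitrate and hours are my own assumptions for illustration, not measured figures):

```python
# How much data would covert always-on audio capture produce per day?
BITRATE_BPS = 16_000          # assumed: 16 kbit/s, a typical low-bitrate speech codec setting
SECONDS_CAPTURED = 8 * 3600   # assumed: 8 hours of captured speech per day

daily_bits = BITRATE_BPS * SECONDS_CAPTURED
daily_megabytes = daily_bits / 8 / 1_000_000
print(f"{daily_megabytes:.1f} MB per day")  # 57.6 MB per day
```

Tens of megabytes of unexplained daily upstream traffic is exactly the kind of thing a packet capture or a mobile data-usage meter would flag immediately.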

Cheradenine,

Fucking thank you. As I said in another reply, if this was true my firewall logs would be full, or my data cap blown in a week.

library_napper, (edited )
@library_napper@monyet.cc avatar

What if the processing is done locally and the only thing they send back home is keywords for marketable products?

Septimaeus, (edited )

Yeah, they'd have to, it seems, but real-time transcription isn't free. Even late-model devices with better inference hardware have limited battery and are subject to energy monitoring. I imagine it'd be hard to conceal that behavior, especially for an app recording in the background.

WetBeardHairs@lemmy.ml mentioned that mobile devices use the same hardware coprocessing used for wake word behavior to target specific key phrases. I don’t know anything about that, but it’s one way they could work around the technical limitations.

Of course, that’s a relatively bespoke hardware solution that might also be difficult to fully conceal, and it would come with its own limitations. Like in that case, there’s a preset list of high value key words that you can tally, in order to send company servers a small “score card” rather than a heavy audio clip. But the data would be far less rich than what people usually think of with these flashy headlines (your private conversations, your bowel movements, your penchant for musical theater, whatever).

Fungah,

My own theory is that they tokenize key words and phrases with an AI so that they’re not sending the actual audio data. Then it’s stored in a form some AI can parse but isn’t technically user data so they can skirt legislation around that.

A tokenized collection of key phrases, omitting delimiters, in text format is going to be much, much smaller than audio, or even a transcript.
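A quick sketch of that size gap (toy IDs and an assumed bitrate; nothing here is a real vendor format):

```python
import struct

# Hypothetical day's worth of detected key-phrase IDs from a preset vocabulary,
# each stored as a 2-byte unsigned integer.
detected_phrase_ids = [17, 512, 4, 99, 512, 1033]  # made-up IDs
payload = struct.pack(f"<{len(detected_phrase_ids)}H", *detected_phrase_ids)
print(len(payload), "bytes")  # 12 bytes

# Versus raw compressed audio at an assumed 16 kbit/s for 8 hours:
audio_estimate_bytes = 16_000 // 8 * 8 * 3600
print(audio_estimate_bytes // 1_000_000, "MB")  # 57 MB
```

A dozen bytes versus tens of megabytes: the token payload vanishes into ordinary app telemetry.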

Septimaeus,

That certainly would make the data smuggling easier. What about battery though? I assume that requires inference and at least rudimentary processing.

How would a background process do this in real time on a mobile device without leaving traceable evidence like cpu time?

BrownTree33,

Could it be implemented on a PC? They're often turned on, and people speak around them too. CPU activity is much harder to trace when there are a lot of different processes running. Someone could blame their phone while the PC nearby is the one listening.

Septimaeus,

Yeah outside mobile devices I imagine there’s a lot more leeway technically speaking. I’d be far more inclined to suspect a smart TV or a home assistant appliance like Amazon Echo, for example. And certainly there are plenty of PCs out there that are 100% compromised.

But it’s the phone that people often think of as eavesdropping on their conversations. The idea is stickier perhaps because it’s a more personal violation. And I wouldn’t put it past data brokers by any means. They would if they could. I’ve just yet to hear a feasible explanation of how they can without being caught. Hence my doubt.

steveman_ha,

What if it's not streaming? What if it's just cached for future access, e.g. the next time the user opens the app (when network traffic spikes anyway)?

Septimaeus,

That’s possible too, and in general I’d think a foreground application currently in use alleviates most of the technical restrictions mentioned (read: why we never install FB).

But again we must assume some uncommon device privileges and we still haven’t solved the problem of background energy usage required to record and/or process a real time feed.

Mossheart,

Or plugs in their phone at night, bypassing energy use concerns?

BigPotato,

Cox also sells home automation bundles which advertise “smart” features like voice recognition which are always plugged into the wall.

ben_dover,

as someone who has played around with offline speech recognition before - there is a reason why ai assistants only use it for the wake word, and the rest is processed in the cloud: it sucks. it’s quite unreliable, you’d have to pronounce things exactly as expected. so you need to “train” it for different accents and ways to pronounce something if you want to capture it properly, so the info they could siphon this way is imho limited to a couple thousand words. which is considerable already, and would allow for proper profiling, but couldn’t capture your interest in something more specific like a mazda 323f.

but offline speech recognition also requires a fair amount of compute power. at least on our phones, it would inevitably drain the battery

andrew_bidlaw,
@andrew_bidlaw@sh.itjust.works avatar

most phones will just struggle to record and process audio indeterminately without a noticeable impact on energy and data use.

I mean, it's still a valid concern for a commoner. Why does my phone have twice the RAM and twice the cores, yet is as slow as my previous one? I'd love this conspiracy to push OS and app makers to do their fucking job.

There's no reason an app should weigh more than 50 MB on a clean install*, and many socials and messengers fail to fit in that. The client I'm using to write this is only 30-something, and that's one person doing it for donations.

If there were a raging theory that apps are selling your data to, like, China, there would be a push to decline that and to optimize apps to fit that image.

  * I obviously exclude games, synths, and editors of any kind, with their textures and templates.

WetBeardHairs,

The filesize of most binaries is dominated by text strings and images. Modern applications are loaded with them. Lemmy is atypical in that it doesn’t need tons of built in images or text.

andrew_bidlaw,
@andrew_bidlaw@sh.itjust.works avatar

I get it. It's just that I don't see any dev-provided images in many big apps, besides a logo and a welcome screen. Updating them with dozens of megabytes doesn't feel okay. It seems like there's some bloat, or asset management problems. Like in some seasonally updated games that store duplicates to speed up map loading, or easily add new content on top of old files instead of redownloading a brand-new db. Some I followed shaved off tens of gigabytes by rearranging stuff.

Like, messengers. I don't get how Viber needs 40+ MB per update when it has nothing but stickers and emoji, which are already installed and probably don't change much. Maybe cheap wireless connections let them ignore size and get heavier in order to offload something from their servers, since many images are localized. Is that what their updates are? Or do they add beta patches consecutively after approval, so you download a couple of them in close succession once they go public?

Goun,

I agree.

What could be possible would be to send tiny bits. For example, a device could categorize some places or times, detect out-of-pattern behaviours, and just record a couple of seconds here and there, then send it to the server when requesting something else to avoid being suspicious. Or just pretend it's a "false positive" or whatever and say "sorry, I didn't get that."

I don’t think they’re listening to everything, but they could technically get something if they wanted to target you.

Septimaeus, (edited )

Right, I suppose cybersecurity isn’t so different than physical security in that way. Someone who really wants to get to you always can (read: why there are so many burner phones at def con).

But for the average person, who uses consumer grade deadbolts in their home and doesn’t hire a private detail when they travel, does an iPhone fit within their acceptable risk threshold? Probably.

admiralteal,

There's also a totally plausible and far more insidious answer to what's going on with the experiences people have of the ads matching their conversations.

That explanation is that advertising works. And worse, it works subconsciously. You're seeing the ads without even noticing them, and then they worm their way into your conversations, at which point you become more aware of them and start noticing the ads.

Which does comport with the billions of dollars spent on advertising every year. It would be very weird if an entire ad industry that's at least a century old was all a complete nonsense waste of money this whole time.

To me, this whole narrative is just another parable about why we need to do everything possible to limit our own exposure to ads to avoid being manipulated.

Septimaeus, (edited )

Damn, I hadn’t thought of that. The chicken egg question of spooky ad relevance. Insidious indeed.

I feel like the idea of some person or group having enough info to psychologically manipulate or predict should be way scarier than the black helicopter stuff, especially given that it’s one of the few conspiracy theories we actually have a bunch of high quality evidence for, just in marketing and statistics textbooks alone.

But here we are. Government surveillance is the hot button, not the fact that marketers would happily sock puppet you given the chance.

Zerush, (edited )
@Zerush@lemmy.ml avatar

Smartphones are by definition spyware, at least if you use the OS as-is, because every aspect of them is controlled and logged, either by Google on Android or by Apple on iOS. Add to that the default apps that cannot be uninstalled on a non-rooted phone. As Cox alleges, they also use third-party logs, so they can track and profile the user very well even without the technology they claim to have.

Although they feel authorized by the user's consent to the TOS and PP, the legality depends directly on the legislation of each country. The TOS and PP itself, to be a legal contract, must comply in all its points with local legislation to be applicable to the user. For this reason, these practices play out very differently in the EU than in the US, where privacy legislation is conspicuous by its absence. That is, US users should take these Cox statements very seriously on their devices, while EU users must also be clear that Google and Apple know exactly what they do and where they live, although they are limited from selling this data to third parties.

Basics:

  • ALWAYS READ THE TOS AND PP
  • Review the permissions of each app, leaving only the most essential ones
  • Deactivate GPS when not in use
  • On Android, review every app with Exodus Privacy; on iOS maybe Lookout or MyCyberHome (freemium apps!)
  • Install as few apps from the store as possible
  • Beware of discount apps from supermarkets or malls
  • Don't store important data on the phone (banking, medical…)

Septimaeus, (edited )

Agreed, though I think it’s possible to use smart devices safely. For Android it can be difficult outside custom roms. The OEM flavors tend to have spyware baked in that takes time and root to fully undo, and even then I’m never sure I got it all. These are the most common phones, however, especially in economy price brackets, which is why I’d agree that for the average user most phones are spyware.

Flashing is not useful advice to most. “Just root it bro” doesn’t help your nontechnical relatives who can’t stop downloading toolbars and VPN installers. But with OEM variants undermining privacy at the system level, it feels like a losing battle.

I’d give credit to Apple for their privacy enablement, especially with E2EE, device lockdown, granular access permission control and audits. Unfortunately their devices are not as affordable and I’m not sure how to advise the average Android user beyond general opt-out vigilance.

Septimaeus, (edited )

Yeah those push token systems need an overhaul. IIRC tokens are specific to app-device combinations, so invalidation that isn’t automatic should be push-button revocation. Users should have control of it like any other API on their device, if only to get apps to stop spamming coupons or whatever.

It’s funny though: when I first saw those headlines, my first reaction was that it was a positive sign, since this was apparently news worthy even though the magnitude of impact for this sort of systemic breach is demonstrably low. (In particular, it pertains to (1) incidental high-noise data (2) associated with devices and (3) available only by request to (4) governments, who are weak compared to even the smallest data brokers WRT capacity for data mining inference and redistribution, to put it mildly.)

Regardless, those systems need attention.

WetBeardHairs,

That is glossing over how they process the data and transmit it to the cloud. The assistant wake word for “Hey Google” invokes an audio stream to an off site audio processor in order to handle the query. So that is easy to identify via traffic because it is immediate and large.

The advertising-wake words do not get processed that way. They are limited in scope and are handled by the low power hardware audio processor used for listening for the assistant wake word. The wake word processor is an FPGA or ASIC - specifically because it allows the integration of customizable words to listen for in an extremely low power raw form. When an advertising wake word is identified, it sends an interrupt to the CPU along with an enumerated value of which word was heard. The OS then stores that value and transmits a batch of them to a server at a later time. An entire day’s worth of advertising wake word data may be less than 1 kb in size and it is sent along with other information.

Good luck finding that on wireshark.
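To illustrate the size claim, here's a toy reconstruction of such a "score card" batch (my invention based on the description above, not an actual vendor protocol): each detection is a (minute-of-day, word ID) pair packed as two 16-bit integers and batched once per day.

```python
import struct

# Hypothetical day: 100 detections of words from a preset list,
# one every 10 simulated minutes (all values made up).
detections = [(minute, word_id % 50) for minute, word_id in
              zip(range(0, 1000, 10), range(100))]

# Pack each (minute, word ID) pair as two little-endian uint16s
batch = b"".join(struct.pack("<HH", m, w) for m, w in detections)
print(len(batch), "bytes")  # 400 bytes -- comfortably under 1 kB
```

A few hundred bytes a day, piggybacked on other traffic, would be essentially invisible in a packet capture.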

Septimaeus, (edited )

Hmm, that's outside my wheelhouse. So you're saying phone hardware is designed to listen for not just one wake word but a bank of multiple predefined or reprogrammable ones? I hadn't read about that yet, but it sounds more feasible than the constant livestream idea.

The Echo had the capacity for multiple wake words IIRC, but I hadn't heard of that for mobile devices. I'm curious how many of these key words they can fit.

gerryflap, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
@gerryflap@feddit.nl avatar

Obviously this is a privacy community, and this ain’t great in that regard, but as someone who’s interested in AI this is absolutely fascinating. I’m now starting to wonder whether the model could theoretically encode the entire dataset in its weights. Surely some compression and generalization is taking place, otherwise it couldn’t generate all the amazing responses it does give to novel inputs, but apparently it can also just recite long chunks of the dataset. And also, why would these specific inputs trigger such a response? Maybe there are issues in the training data (or process) that cause it to do this. Or maybe this is just a fundamental flaw of the model architecture? And maybe it’s even an expected thing. After all, we as humans also have the ability to recite pieces of “training data” if we deem them interesting enough.
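One way to sanity-check the "encode the entire dataset" question is a capacity estimate (all figures below are ballpark assumptions, not published numbers for any specific model):

```python
# Could a model literally store its whole training set in its weights?
params = 175e9            # assumed parameter count, GPT-3 ballpark
bits_per_param = 16       # fp16 weights
weight_bits = params * bits_per_param

train_tokens = 300e9      # assumed training-set size in tokens
bits_per_token = 15       # assumed order-of-magnitude entropy of raw text per token

data_bits = train_tokens * bits_per_token
print(f"weights hold at most {weight_bits / data_bits:.2f}x the raw data")  # 0.62x
```

Under these assumptions the weights can't losslessly contain the full dataset, which fits the intuition that mostly compression and generalization happen, with verbatim recall only for some chunks.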

Cheers,

They mentioned this was patched in chatgpt but also exists in llama. Since llama 1 is open source and still widely available, I’d bet someone could do the research to back into the weights.

Socsa,

Yup, with 50B parameters or whatever it is these days there is a lot of room for encoding latent linguistic space where it starts to just look like attention-based compression. Which is itself an incredibly fascinating premise. Universal Approximation Theorem, via dynamic, contextual manifold quantization. Absolutely bonkers, but it also feels so obvious.

In a way it makes perfect sense. Human cognition is clearly doing more than just storing and recalling information. “Memory” is imperfect, as if it is sampling some latent space, and then reconstructing some approximate perception. LLMs genuinely seem to be doing something similar.

j4k3,
@j4k3@lemmy.world avatar

I bet these are instances of overtraining, where the data has been input too many times and the phrases stick.

Models can do some really obscure behavior after overtraining. Like I have one model that has been heavily trained on some roleplaying scenarios that will full on convince the user there is an entire hidden system context with amazing persistence of bot names and story line props. It can totally override system context in very unusual ways too.

I’ve seen models that almost always error into The Great Gatsby too.

TheHobbyist,

This is not the case in language models. While computer vision models train over multiple epochs, sometimes in the hundreds or so (an epoch being one pass over all training samples), a language model is often trained on just one epoch, or in some instances up to 2-5 epochs. Seeing so many tokens so few times is quite impressive actually. Language models are great learners and some studies show that language models are in fact compression algorithms which are scaled to the extreme so in that regard it might not be that impressive after all.
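The compression framing can be illustrated with an ordinary compressor (a loose analogy only; zlib is obviously not a language model, but it shows how redundancy in text collapses):

```python
import zlib

# Highly redundant text compresses dramatically; an LLM can be viewed as a
# learned, far more aggressive version of the same exploitation of structure.
text = ("The quick brown fox jumps over the lazy dog. " * 200).encode()
compressed = zlib.compress(text, level=9)
print(len(text), "->", len(compressed), "bytes")
```

The difference with a language model is that its "decompressor" also generalizes to inputs it never saw, which is exactly why occasional verbatim recitation is surprising.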

j4k3, (edited )
@j4k3@lemmy.world avatar

How many times do you think the same data appears after a model has as many datasets as OpenAI is using now? Even unintentionally, there will be some inevitable overlap. I expect something like data related to OpenAI researchers to reoccur many times. If nothing else, overlap in redundancy found in foreign languages could cause overtraining. Most data is likely machine curated at best.

LemmyIsFantastic, (edited ) to privacy in Are Phones and Smart Speakers Listening to You? Cox Media Group Claims They Can | Cord Cutters News

And yet thousands of security researchers can’t find a shred of evidence. This shit is tiresome and counterproductive. The general public is weary of hearing this made-up bullshit.

The technical practice isn’t hard; that’s the claim. The reality is nobody is actually doing this, and this is just another repost of the same 404 article from months ago.

JSens1998,

Bro, I’ll literally be having a conversation with someone about a topic, and all of a sudden Google starts recommending me products related to the discussion afterwards. Smart phones and smart speakers without a doubt listen in on our conversations. There’s the evidence.

LemmyIsFantastic, (edited )

Find a literal shred of evidence. You have no clue how ads work bruh.

library_napper, (edited )
@library_napper@monyet.cc avatar

Eh, surprised that’s happening to someone in this community. Strip Google off your phone and throw out any hardware with a microphone that doesn’t run open source software and this will stop happening.

elbarto777,

That’s not evidence. That’s some random anecdote. Back it up or gtfo.

earmuff, to privacy in Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data

Now do the same thing with Google Bard.

ForgotAboutDre,

They are probably publishing this because they've recently made Bard immune to such attacks. This is Google PR.

Artyom,

Generative Adversarial Networks (GANs)

WaxedWookie,

Why bother when you can just do it with Google search?

Cosmicomical, to privacy in Verizon Gave Her Data to a Stalker. ‘This Has Completely Changed My Life’

“We are taking every step possible to work with the police so they can identify them.”

Yeah just make sure it's the actual police.

regalia, to upliftingnews in Apple Formally Endorses Right to Repair Legislation After Spending Millions Fighting It

I wouldn’t call this uplifting. They’re doing so in bad faith, as this almost entirely is focused on them.

salton,

They will find a way to make it as inconvenient and uneconomical as possible while publicizing how environmentally conscious they are.

Chais, to upliftingnews in Apple Formally Endorses Right to Repair Legislation After Spending Millions Fighting It
@Chais@sh.itjust.works avatar

Judge them by their actions, not their words. youtu.be/r0Hwb5xvBn8

plain_and_simply, to privacy in Verizon Gave Her Data to a Stalker. ‘This Has Completely Changed My Life’

Seriously? What a stupid mistake to make. There should always be internal processes right?

ricecake,

Yup. I used to work for a much smaller tech company, and we had a perfectly reasonable process for dealing with court orders and search warrants that involved crazy things like “get it in hard copy” and “verify the information contained in the order”.
For some things, we would even just ask the officer to physically come in, and that was weirdly never a problem.

sqgl,

And now they will probably overcompensate with frustrating security theatre beyond sensible precautions.

admiralteal,

I see no problem whatsoever with having frustrating levels of obtuse security required before complying with a request from law enforcement.

There is no downside.

sunbrrnslapper, to upliftingnews in California Lawmakers Unanimously Pass Right to Repair Legislation

Does this mean that the ice cream machines at McDonald’s will be working? 😉

Seriously though, this is great news.

WtfEvenIsExistence,

Right to repair doesn’t mean McDonald’s actually cares about fixing it. Their food normally tastes like shit anyway and orders are wrong all the time. Doubt they care about maintaining a machine.

dannoffs,
@dannoffs@lemmy.sdf.org avatar

I think I saw somewhere that their machines are always broken because they’re locked into a contract with a specific service company that sucks, but that might just be McPropaganda.

sunbrrnslapper,

Oh, I know. I was just being silly. But my hope springs eternal.
