POST https://kbin.spritesserver.nl/f/inbox

Messages

Ordered list of dispatched messages across all your buses

"App\Message\ActivityPub\Inbox\ActivityMessage"
Caller: In SharedInboxController.php line
Bus: messenger.bus.default
Message:
App\Message\ActivityPub\Inbox\ActivityMessage {#353
  +payload: "{"@context":["https://join-lemmy.org/context.json","https://www.w3.org/ns/activitystreams"],"actor":"https://lemmy.ml/c/opensource","to":["https://www.w3.org/ns/activitystreams#Public"],"object":{"id":"https://lemmygrad.ml/activities/update/886f2d37-b87b-4e62-a423-dffaf4474311","actor":"https://lemmygrad.ml/u/pcalau12i","@context":["https://join-lemmy.org/context.json","https://www.w3.org/ns/activitystreams"],"to":["https://www.w3.org/ns/activitystreams#Public"],"object":{"type":"Note","id":"https://lemmygrad.ml/comment/5910425","attributedTo":"https://lemmygrad.ml/u/pcalau12i","to":["https://www.w3.org/ns/activitystreams#Public"],"cc":["https://lemmy.ml/c/opensource","https://lemmy.sdf.org/u/Dyf_Tfh"],"content":"<p>The 1.5B/7B/8B/13B/32B/70B models are all officially DeepSeek R1 models, that is what DeepSeek themselves refer to those models as. It is DeepSeek themselves who produced those models and released them to the public and gave them their names. And their names are correct, it is just factually false to say they are not DeepSeek R1 models. They are.</p>\n<p>The “R1” in the name means “reasoning version one” because it does not just spit out an answer but reasons through it with an internal monologue. For example, here is a simple query I asked DeepSeek R1 13B:</p>\n<blockquote>\n<p>Me: can all the planets in the solar system fit between the earth and the moon?</p>\n<p>DeepSeek: Yes, all eight planets could theoretically be lined up along the line connecting Earth and the Moon without overlapping. The combined length of their diameters (approximately 379,011 km) is slightly less than the average Earth-Moon distance (about 384,400 km), allowing them to fit if placed consecutively with no required spacing.</p>\n</blockquote>\n<p>However, on top of its answer, I can expand an option to see its internal monologue it went through before generating the answer, which <a href=\"https://pastebin.com/raw/qt69w6g6\" rel=\"nofollow\">you can find the internal monologue here</a> because it’s too long to paste.</p>\n<p>What makes these consumer-oriented models different is that that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” parts mean in the name. The 7B model is trained on synthetic data produced by Qwen. However, neither Qwen nor Llama can “reason,” they do not have an internal monologue. This is why it is just incorrect to claim that something like DeepSeek R1 7B Qwen Distill has no relevance to DeepSeek R1 but is just a Qwen model. If it’s supposedly a Qwen model, why is it that it can do something that Qwen cannot do but only DeepSeek R1 can?</p>\n<p>It’s because, again, <em><strong>it is a DeepSeek R1 model, produced using a very similar training process as the full-sized model, but just using less parameters and synthetic data.</strong></em></p>\n","inReplyTo":"https://lemmy.sdf.org/comment/17528442","mediaType":"text/html","source":{"content":"The 1.5B/7B/8B/13B/32B/70B models are all officially DeepSeek R1 models, that is what DeepSeek themselves refer to those models as. It is DeepSeek themselves who produced those models and released them to the public and gave them their names. And their names are correct, it is just factually false to say they are not DeepSeek R1 models. They are.\n\nThe \"R1\" in the name means \"reasoning version one\" because it does not just spit out an answer but reasons through it with an internal monologue. 
For example, here is a simple query I asked DeepSeek R1 13B:\n\n> Me: can all the planets in the solar system fit between the earth and the moon? \n> \n> DeepSeek: Yes, all eight planets could theoretically be lined up along the line connecting Earth and the Moon without overlapping. The combined length of their diameters (approximately 379,011 km) is slightly less than the average Earth-Moon distance (about 384,400 km), allowing them to fit if placed consecutively with no required spacing.\n\nHowever, on top of its answer, I can expand an option to see its internal monologue it went through before generating the answer, which [you can find the internal monologue here](https://pastebin.com/raw/qt69w6g6) because it's too long to paste.\n\nWhat makes these consumer-oriented models different is that that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That's what the \"Qwen\" or \"Llama\" parts mean in the name. The 7B model is trained on synthetic data produced by Qwen. However, neither Qwen nor Llama can \"reason,\" they do not have an internal monologue. This is why it is just incorrect to claim that something like DeepSeek R1 7B Qwen Distill has no relevance to DeepSeek R1 but is just a Qwen model. If it's supposedly a Qwen model, why is it that it can do something that Qwen cannot do but only DeepSeek R1 can?\n\nIt's because, again, ***it is a DeepSeek R1 model, produced using a very similar training process as the full-sized model, but just using less parameters and synthetic data.*** ","mediaType":"text/markdown"},"published":"2025-02-01T02:43:33.046781Z","updated":"2025-02-01T02:45:11.227158Z","tag":[{"href":"https://lemmy.sdf.org/u/Dyf_Tfh","name":"@Dyf_Tfh@lemmy.sdf.org","type":"Mention"}],"distinguished":false,"language":{"identifier":"en","name":"English"},"audience":"https://lemmy.ml/c/opensource"},"cc":["https://lemmy.ml/c/opensource","https://lemmy.sdf.org/u/Dyf_Tfh"],"tag":[{"href":"https://lemmy.sdf.org/u/Dyf_Tfh","name":"@Dyf_Tfh@lemmy.sdf.org","type":"Mention"}],"type":"Update","audience":"https://lemmy.ml/c/opensource"},"cc":["https://lemmy.ml/c/opensource/followers"],"type":"Announce","id":"https://lemmy.ml/activities/announce/update/077b9dfc-c2b0-4d31-99b5-601d352b726c"}"
  +request: [
    "host" => "kbin.spritesserver.nl"
    "method" => "POST"
    "uri" => "/f/inbox"
    "client_ip" => "54.36.178.108"
  ]
  +headers: [
    "content-type" => [
      "application/activity+json"
    ]
    "host" => [
      "kbin.spritesserver.nl"
    ]
    "date" => [
      "Sat, 01 Feb 2025 02:45:39 GMT"
    ]
    "digest" => [
      "SHA-256=Ub9LEoPRSEzkNewI1B2s+toVQhzWXdvmtpcXCy6nW+U="
    ]
    "signature" => [
      "keyId="https://lemmy.ml/c/opensource#main-key",algorithm="hs2019",headers="(request-target) content-type date digest host",signature="XNhT06j6Q6w3s/Rto8+siP/HGewq52ADaku0J8qHqyPWr3kd9eaOjhJFXoz0n63OyL7/XQc4l6r3JRGH67u6w3ybqwlXYcrNHIBj4HW20UsmcpUa8ihFPtWdtFaIZJibtcsX1lferqSIUiPyensq/ay752G8xVvf1cGbWoYujI7R9etqdECndPKlzjyr2jcmGhel4h5ztoSKppXpPAiTH0TdFJ5rABywn8AK4xNa3TLFq+tEJhMG6S1E8vMfF3JZEtTEoxFkn/eEgXollc0Un6RQfTy8PINBMuy5zgOS82RNMoq7YFD6KdrKkBlhrU0SHGT9/HoQe/X0PonWhh05kQ==""
    ]
    "accept" => [
      "*/*"
    ]
    "user-agent" => [
      "Lemmy/0.19.8; +https://lemmy.ml"
    ]
    "accept-encoding" => [
      "gzip"
    ]
    "content-length" => [
      "5800"
    ]
    "x-php-ob-level" => [
      "1"
    ]
  ]
}
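
The request above carries a Digest header and an HTTP Signature (algorithm hs2019) over "(request-target) content-type date digest host", which the receiving instance is expected to verify against the actor's public key before accepting the activity. Below is a minimal sketch of that check in PHP, assuming the raw body, the header values, and the actor's public key PEM (fetched from the keyId URL, not shown here) are already available; it is an illustration of the scheme, not kbin's actual implementation.

<?php
// Sketch only: verify the SHA-256 Digest header against the raw body,
// then check the hs2019 signature with the actor's public key.
// $rawBody, $headers and $publicKeyPem are assumed inputs; fetching the
// key from the keyId URL is omitted.

function digestMatches(string $rawBody, string $digestHeader): bool
{
    $expected = 'SHA-256=' . base64_encode(hash('sha256', $rawBody, true));
    return hash_equals($expected, $digestHeader);
}

function signatureIsValid(array $headers, string $method, string $path, string $publicKeyPem): bool
{
    // Parse keyId="...",algorithm="...",headers="...",signature="..."
    preg_match_all('/(\w+)="([^"]*)"/', $headers['signature'], $m);
    $params = array_combine($m[1], $m[2]);

    // Rebuild the signing string in the order given by the "headers" parameter.
    $lines = [];
    foreach (explode(' ', $params['headers']) as $name) {
        $lines[] = $name === '(request-target)'
            ? '(request-target): ' . strtolower($method) . ' ' . $path
            : $name . ': ' . $headers[$name];
    }
    $signingString = implode("\n", $lines);

    // hs2019 with RSA actor keys is verified as RSA-SHA256 in practice.
    return openssl_verify(
        $signingString,
        base64_decode($params['signature']),
        $publicKeyPem,
        OPENSSL_ALGO_SHA256
    ) === 1;
}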
Envelope stamps when dispatching: No items
Envelope stamps after dispatch:
Symfony\Component\Messenger\Stamp\BusNameStamp {#343
  -busName: "messenger.bus.default"
}
Symfony\Component\Messenger\Stamp\SentStamp {#268
  -senderClass: "Symfony\Component\Messenger\Bridge\Doctrine\Transport\DoctrineTransport"
  -senderAlias: "async_ap"
}
Symfony\Component\Messenger\Stamp\TransportMessageIdStamp {#227
  -id: "48504948"
}