chitak166,

Imagine all the work we wouldn’t have to re-do if we had just done it right the first time.

Stallman was right, as usual.

manito_manopla,
@manito_manopla@lemmy.ml avatar

How is this different from DXVK?

FehrIsFair,

It’s made to interact directly with the GPU instead of translating it to the equivalent GPU call in Vulkan.

equinox,
@equinox@hexbear.net avatar

Iirc, DXVK translates DirectX API calls to Vulkan calls, meaning the original game renders to Vulkan in the end. With this, no translation will be needed which should result in slightly better performance and more likely, much better compatibility.
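As a toy illustration of that difference (all names below are invented; this is not real DXVK or driver code), a translation layer accepts the DirectX-shaped call and re-issues it as a Vulkan call, while a native implementation hands it straight to the driver:

```python
# Hypothetical sketch of a translation layer vs a native implementation.
# None of these names are real DXVK or driver entry points.

calls_issued = []  # records what ultimately reaches the "driver"

def vk_cmd_draw(vertex_count):
    # Stand-in for a Vulkan driver entry point.
    calls_issued.append(("vulkan_draw", vertex_count))

def native_d3d_draw(vertex_count):
    # Stand-in for a hypothetical native D3D driver entry point.
    calls_issued.append(("d3d_draw", vertex_count))

def dxvk_style_draw(vertex_count):
    # Translation layer: accept the D3D-shaped call, re-issue it as Vulkan.
    # This extra hop is where translation overhead (and state juggling) lives.
    vk_cmd_draw(vertex_count)

# A game making the same logical call through each path:
dxvk_style_draw(3)    # D3D call -> Vulkan call -> driver
native_d3d_draw(3)    # D3D call -> driver
```

The translated path ends up at the driver either way; the question is how much work the extra hop does per call.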

Urist,
@Urist@lemmy.ml avatar

IIRC the translation overhead is usually negligible and sometimes results in better performance due to Vulkan being very performant.

azvasKvklenko, (edited )

This is barely explained and the readme gave me more questions than answers.

I immediately thought it’s going to be a library for Wine to use instead of DXVK/VKD3D.

If that’s only for developers to build Linux ports, very little to no real-world use is expected, unless it can somehow offer effortless conversions. Even then, developers are likely to prefer relying on Proton/Wine to have a single binary for both platforms, rather than maintaining them separately.

I wonder how much work it will take for drivers to support the API… Or maybe it won’t need anything in Mesa and will somehow work directly on DRM with strictly platform-agnostic code if that’s possible?

Offering better performance than the likes of DXVK is brave, to put it mildly. In many scenarios DXVK can already match or surpass native Windows performance even when running Windows binaries.

jackpot, (edited )
@jackpot@lemmy.ml avatar

This is barely explained and the readme gave me more questions than answers.

make a pull request to change the readme then

uis,
@uis@lemmy.world avatar

Doesn’t DirectX require a lot of stuff from winapi?

Urist,
@Urist@lemmy.ml avatar

Thought so as well. In which case I do not really see much difference between this and other translation layers.

library_napper,
@library_napper@monyet.cc avatar

One of these days I’ll be able to play quake in QubesOS

CabbageColonialist,

I use Qubes btw. But you wouldn’t even know.

uis,
@uis@lemmy.world avatar

Play Xonotic in Linux. Or Quake.

library_napper,
@library_napper@monyet.cc avatar

Doesn’t work in Qubes

uis,
@uis@lemmy.world avatar

Xonotic or Quake?

viking,
@viking@infosec.pub avatar

Is anyone still playing Xonotic? I used to play Nexuiz back before they sold the name, and tried Xonotic recently, only to find servers with at most one other player idling around. I genuinely thought it was dead.

Chewy7324,

I haven’t played for a year or two, but Xonotic doesn’t have many concurrent players for most of the day. Lobbies filled up around evening/night UTC±0, iirc.

uis,
@uis@lemmy.world avatar

I know 4 major servers: Feris and 3 of Jeff’s. Feris is the most populated one during European night. And there are pickup servers that usually play 4v4. Smaller servers also exist that occasionally get players, like xonotic-relax.ru.

AlmightySnoo,

That repo is just pure trolling, read the “Improved performance” section and open some source files and you’ll understand why.

hare_ware,

How would a native implementation be better than DXVK? Wouldn’t developers still need to port the rest of their app to Linux to use it? At that point, you could still just include DXVK; would the performance really be that much worse?

ziggurat,

Native Vulkan or OpenGL games don’t need to translate these calls. If DirectX could run natively on Linux, it wouldn’t have to be translated either.

leopold, (edited )

Afaik the only way to avoid translating into OpenGL and Vulkan would be to write native drivers. Stuff like gallium-nine, for instance. Is that what this project is doing? Though obviously that’s just for the Direct3D side of things and there’s a lot more to DirectX than just that. Still, it’s hard not to question how much of this is just duplicating work already done for Wine.

GustavoM,
@GustavoM@lemmy.world avatar

Aside from “ew installing Winblows stuff in my distro ewwww”, this will be a game changer if they do it right.

WeLoveCastingSpellz,

Holly Fuck!

troyunrau,
@troyunrau@lemmy.ca avatar

Poor Holly

filister,

Noob here, but can someone explain to me what’s the advantage of DirectX vs Vulkan, apart from being around for longer? And why do more developers embrace Vulkan for better portability?

velox_vulnus,

Also a noob, but from what I understand, Vulkan is more low-level.

filister,

Does this make it harder to implement?

velox_vulnus, (edited )

Printing a gradient triangle using C, in OpenGL, takes about 100-130 lines; it could be less, I think. In Vulkan, it takes about a thousand lines.

Source: I wrote a “simple” gradient triangle in Vulkan, using C during my free time. Created the gradient triangle in C as a part of my university coursework.
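The gap mostly comes from Vulkan making you spell out steps the OpenGL driver handles for you. Schematically (hypothetical function names, not real API calls):

```python
# Contrived sketch of high-level vs low-level API surface.
# No real graphics API is modelled here; the names are invented.

log = []  # records which steps ran, and in what order

# Low-level style: the caller drives every step and can reorder,
# omit, or tune each one, at the cost of writing them all out.
def allocate_buffer():
    log.append("alloc")

def upload_vertices(vertices):
    log.append(f"upload:{len(vertices)}")

def record_draw():
    log.append("draw")

def submit():
    log.append("submit")

# High-level style: one call with a fixed internal policy and no knobs.
def draw_triangle(vertices):
    allocate_buffer()
    upload_vertices(vertices)
    record_draw()
    submit()

draw_triangle([(0, 0), (1, 0), (0, 1)])
```

A real Vulkan program additionally manages instances, devices, swapchains, pipelines, and synchronization explicitly, which is where most of those thousand lines go.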

KingThrillgore,
@KingThrillgore@lemmy.ml avatar

It takes 75 lines to draw a blank window. It takes like three in CoreAnimation in macOS. We really need an OSS take on CoreAnimation but I’m also fine leaving the graphics work to a game engine.

Ansis,

Lower level means you have more control over the small details. However, that also means that you have to reimplement some things from scratch, while higher level frameworks do those things for you.

heartsofwar, (edited )

^ this is the key

There were two major problems with OpenGL:

  • It was originally designed and intended as a professional software (high-level) 3D CAD API, not a gaming one
  • Extensive changes to the API were constantly being submitted by different vendors (AMD (ATI), Nvidia, Microsoft, etc) to enhance its performance on their respective hardware in their respective situations.

This meant that almost every API change submitted by any one vendor was immediately scrutinized as to whether it was for gaming or 3D CAD, and usually disliked for adding bloat that the other vendors didn’t need or, worse, causing hardware conflicts, which often led to performance degradation for the other vendors.

This is exactly why Nvidia bundles their own version of OpenGL with their drivers; they can make changes immediately and release them to see their impact without approval, and if a change does well enough, then submit it. At the end of the day though, some submissions are accepted and others are not, which means Nvidia has to maintain the rejected changes on their own… so there is a benefit to getting API changes accepted.

Microsoft actually blazed the path that Nvidia took; Windows used to (might still… not sure) ship with its own version of OpenGL binaries, but they disliked having to maintain the changes and fight for acceptance enough that they eventually decided to develop DirectX (among other desires, like access to input and audio, etc).

DirectX 3D and Vulkan (based on AMD’s Mantle, which was inspired by DirectX 12 3D) do not have these issues because both are low-level APIs, which means that most of the code that would be specific to the GPU or to AMD (ATI), Nvidia, etc. is not hard-coded on the driver side like OpenGL; it is done by the application.

teawrecks,

I think you are confused about the difference between the opengl spec and an actual implementation of the spec, and who is responsible for shipping what.

  • Nvidia ships their own opengl implementation with their drivers, because that’s what a driver is.
  • Microsoft doesn’t ship “opengl binaries”, they don’t have any hardware. Maybe you mean they published their own fork of the ogl spec before giving up and making DX? That may be true.
  • Mantle predates DX12, both vulkan and dx12 took inspiration from it, not the other way around.
  • There are two interpretations being thrown around for “low level”:
    • The more traditional meaning is “how far are you from natively talking to hardware?” which is not determined by the rendering API, but the specific implementation. Ex. Nvidia’s dx9 driver is equally “low level” as their DX12 driver, in that the API calls you make are 1 step away from sending commands directly to GPU hardware. Meanwhile, using DX12 via DXVK would be 2 steps away from hardware, which is “higher level” than just using Nvidia’s DX9 implementation directly. Again, “level” is not determined by the API.
    • the other interpretation is what I would call “granularity” or “terse-ness” of the API, i.e. how much control over the hardware does it expose. In this case, yes, dx12 and vulkan give finer control over the hardware vs dx9 and ogl.
  • Your last statement… doesn’t make sense, I don’t understand it. Maybe you’re trying to say that DX12/VK are made to be thinner, with less internal state tracking and less overhead per call, and therefore all that state tracking is now the app’s responsibility? Yes, that is true. But I wouldn’t say that code is “specific to a GPU”.
heartsofwar, (edited )

Nvidia ships their own opengl implementation with their drivers, because that’s what a driver is.

Including OpenGL does not a driver make… i.e. Nvidia doesn’t have to ship their own implementation of OpenGL. They could do what AMD does on Linux and rely on the upstream OpenGL implementation from Mesa; however, they choose not to do so because of the reasons I outlined, among others.

Microsoft doesn’t ship “opengl binaries”, they don’t have any hardware.

There was a time they did, yes, before Direct X existed

Maybe you mean they published their own fork of the ogl spec before giving up and making DX? That may be true.

No, they made their own contributions to the spec to improve Windows game performance, but didn’t publish their own spec; however, they did implement the upstream spec with their contributions and ship it integrated into Windows. This was practically over with by 1995 when DirectX was introduced, so a very long time ago.

Mantle predates DX12, both vulkan and dx12 took inspiration from it, not the other way around.

Yes and no… DirectX 3D was always low-level; it’s why DirectX (among being a one-stop shop) worked so well for Xbox, etc. So, AMD got the idea for Mantle from MS DirectX, and when AMD met with Khronos to spin off Vulkan, MS took notice that their implementation was not as low-level as DirectX 11 and they actually made DirectX 12 less low-level dependent.

Ex. Nvidia’s dx9 driver is equally “low level” as their DX12 driver

No it’s not, see above… DirectX 9 is actually much lower level than 12; however, DirectX 12 has many more requirements for certain tech that games today see as necessary that DirectX 9 didn’t.

dx12 and vulkan give finer control over the hardware vs dx9 and ogl.

Yes and no… it depends on the particular portion of the spec you are talking about. For example, DirectX 9 had much lower-level control of the CPU, but as time moved on and CPU reliance lessened, DirectX 12 came to have less control of the CPU but more control of the GPU.

teawrecks,

So, here’s the thing, I don’t consider myself an expert in many things, but this subject is literally my day job, and it’s possibly the only thing I do consider myself an expert in. And I’m telling you, you are confused and I would gladly help clear it up if you’ll allow me.

They could do what AMD does on Linux and rely on the openGL upstream implementation from Mesa

Nvidia’s OGL driver is a driver. Mesa’s radv backend is a driver. Nouveau, the open source Nvidia mesa backend, is a driver. An opengl implementation does a driver make.

There was a time they did, yes

What GPU did Microsoft’s driver target? Or are you referring to a software implementation?

Yes and No… DirectX 3D was always low-level

You literally said that Mantle was inspired by DX12, which is false. You can try to pivot to regurgitating more Mantle history, but I’m just saying…

No its not, see above…

Yes, it is, see above my disambiguation of the term “low-level”. The entire programming community has always used the term to refer to how far “above the metal” you are, not how granular an API is. The first party DX9 and DX12 drivers are equally “low-level”, take it from someone who literally wrote them for a living. The APIs themselves function very differently to give finer control over the hardware, and many news outlets and forums full of confused information (like this one) like to infer that that means it’s “lower level”.

Your last statement doesn’t make sense, so I don’t know how to correct it.

heartsofwar, (edited )

Nvidia’s OGL driver is a driver. Mesa’s radv backend is a driver. Nouveau, the open source Nvidia mesa backend, is a driver. An opengl implementation does a driver make.

No, a driver is kernel code that interfaces with hardware; Mesa’s RADV implements Vulkan and RadeonSI implements OpenGL but both sit at the user level and get called by AMDGPU (the driver in the kernel). Above the kernel at user level is simply software…

Nouveau is a driver, yes… but it is in the kernel and calls into Mesa as well…

What GPU did Microsoft’s driver target? Or are you referring to a software implementation?

You seem to think that Microsoft needed to develop a GPU before implementing their own version of OpenGL… this is flawed for a couple of reasons that I’ve already outlined:

  1. when OpenGL was designed, GPUs didn’t exist. Video cards existed, but a video card != GPU
  2. OpenGLs original purpose was to be a 3D CAD (Computer Aided Design) graphics API …
  3. If you’ve ever used MS Windows before Windows 95 or even Windows 95 before Direct X was released, you’d know… MS shipped their own opengl32.dll with Windows

You literally said that Mantle was inspired by DX12, which is false. You can try to pivot to regurgitating more Mantle history, but I’m just saying…

AMD Mantle was inspired by DirectX 12… it was inspired by all of DirectX, including the next gen in development at the time, which was DirectX 12.

take it from someone who literally wrote them for a living

For someone of your calibre, I’d expect a better understanding of what a driver is, then. “Above the metal”, or more commonly “bare metal”, should give the first clue. An implementation of OpenGL, a graphics library, != a driver…

I will refrain from posting any further… this is going nowhere…

vikingtons, (edited )
@vikingtons@lemmy.world avatar

This is confusing. There are kernel and user space drivers. For example, amdgpu is the kernel driver (inclusive of KMD, DAL & several other functions like powerplay), RadeonSI / RADV / AMDVLK / OGLP (amdgpu-pro) are UMDs for 3D GFX API implementations.

Mantle was not inspired by DX at its time. It was designed as an alternative to OGL and d3d11.

LemmyHead,

Also a noob, but I think Microsoft improved low-level access in recent DX versions

Treeniks,

This is correct, while OpenGL and DirectX 11 and before are considered high level APIs, Vulkan and DirectX 12 are both considered low level APIs.

LemmyHead,

I think it’s more about portability and making it easier for windows devs to support Linux for their games

AMDIsOurLord,

OpenGL is actually older. Microsoft just spent a lot of time and money on DX adoption.

Overall, it’s the native API of Windows and that has the largest user base. On the other hand, many non-game professional apps use OpenGL/Vulkan

Squid,

Could be big. Love wine but even games with native release for Linux have wine reliance

Dremor,
@Dremor@lemmy.world avatar

I didn’t see any wine binaries in my Linux native game. Care to give a few examples?

Chobbes,

I think anything that CodeWeavers helped port. I think Bioshock Infinite is one such game. I’m not sure if you’d see wine binaries, though; it could all be statically linked in.

sekhat,

This seems incorrect; if it’s running natively, it doesn’t need to rely on wine…

Chobbes,

There’s a few Linux “native” releases on steam that use compatibility layers based on wine behind the scenes, which I think is probably what they mean.

Also, this feels wrong, but… Is wine native? It’s mostly just the windows api implemented as Linux libraries. What’s the distinction that makes it “non-native” compared to other libraries? Is SDL non-native too?

teawrecks,

“Native” means “no platform-specific runtime translation layers”. An app built on SDL does the translation to the final rendering API calls at compile time. But a DX app running on Linux has to do JIT translation to ogl/vk when running through wine, which is just overhead.

Chobbes,

My understanding is that DXVK implements the Direct3D API using Vulkan behind the scenes. So, sure, there might be a bit of overhead versus a more direct implementation. Frankly this doesn’t feel all that different from something like SDL to me. Shaders will have to be compiled into shaders that Vulkan understands, but you could just think of this as part of the front end for shader compilation.

I do agree that it feels less native to me too (particularly over the rest of wine), but it’s sort of an arbitrary distinction.

teawrecks,

An app running on SDL which targets OGL/vulkan is going through all the same levels of abstraction on windows as it is Linux. The work needed at runtime is the same regardless of platform. Therefore, we say it natively supports both platforms.

But for an app running DX, on windows the DX calls talk directly to the DX driver for the GPU which we call native, but on Linux the DX calls are translated at runtime to Vulkan calls, then the vulkan calls go to the driver which go to the hardware. There is an extra level of translation required on one platform that isn’t required on the other. So we call that non-native.

Shader compilation has its own quirks. DX apps don’t ship with hlsl, they precompile their shaders to DXIL, which is passed to the next layer. On windows, it then gets translated directly to native ISA to be executed on the GPU EUs/CUs/whatever you wanna call them. On Linux, the DXIL gets translated to spir-v, which is then passed to the vulkan driver where it is translated again to the native ISA.

But also, the native ISA can be serialized out to a file and saved so it doesn’t have to be done every time the game runs. So this is only really a problem the first time a given shader is encountered (or until you update the app or your drivers).

Finally, this extra translation of DXIL through spir-v often has to be more conservative to ensure correct behavior, which can add overhead. That is to say, even though you might be running on the same GPU, the native ISA that’s generated through both paths is unlikely to be identical, and one will likely perform better, and it’s more likely to be the DXIL->ISA path because that’s the one that gets more attention from driver devs (ex. Nvidia/amd engineers optimizing their compilers).
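The two shader paths described above can be sketched as translation pipelines plus a cache (strings stand in for binary formats; none of these functions are real compilers):

```python
# Toy model of the two shader compilation paths described above.
# Strings stand in for binary formats; nothing here is a real compiler.

def dxil_to_isa(dxil):
    # Windows path: one translation step, DXIL -> native GPU ISA.
    return f"isa({dxil})"

def dxil_to_spirv(dxil):
    # Linux/DXVK path adds an intermediate step: DXIL -> SPIR-V.
    return f"spirv({dxil})"

def spirv_to_isa(spirv):
    # ...then the Vulkan driver compiles SPIR-V -> native GPU ISA.
    return f"isa({spirv})"

isa_cache = {}  # models the on-disk shader cache

def compile_shader_linux(dxil):
    if dxil not in isa_cache:  # jit only the first time a shader is seen
        isa_cache[dxil] = spirv_to_isa(dxil_to_spirv(dxil))
    return isa_cache[dxil]

windows_isa = dxil_to_isa("shader0")
linux_isa = compile_shader_linux("shader0")
# Same source shader, different pipelines -> potentially different native code:
# windows_isa == "isa(shader0)", linux_isa == "isa(spirv(shader0))"
```

The differing end results model the point that the two paths are unlikely to emit identical machine code, and the cache models why the cost is mostly a first-run problem.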

Chobbes,

You’re not wrong, and the translation layers definitely do make a difference for performance. Still, it’s not all that different from a slightly slow, slightly odd “native” implementation of the APIs. It’s a more obvious division when it’s something like Rosetta that’s translating between entirely different ISAs.

teawrecks,

SDL isn’t adding any runtime translation overhead, that’s the difference. SDL is an abstraction layer just like UE’s RHI or the Unity Render backends. All the translation is figured out at compile time, there’s no runtime jitting instructions for the given platform.

It’s a similar situation with dynamic libraries: using a DLL or .so doesn’t mean you’re not running code natively on the CPU. But the java or .net runtimes are jiting bytecode to the CPU ISA at runtime, they are not native.

I’m sorry if I’m not explaining myself well enough, I’m not sure where the confusion still lies, but using just SDL does not make an app not-native. As a linux gamer, I would love it if more indie games used SDL since it is more than capable for most titles, and would support both windows and Linux natively.

Chobbes, (edited )

You’re explaining yourself fine, I just don’t necessarily agree with the distinction. It’s like when people say a language is “a compiled language” when that doesn’t really have much to do with the language, it’s more of an implementation detail. It’s a mostly arbitrary distinction that makes sense to talk about sometimes in practice, but it’s not necessarily meaningful philosophically.

That said, SDL isn’t really any different. It’s not translating languages, but you still have additional function calls and overhead wrapping lower level libraries, just the same as wine. DXVK has an additional problem where shaders will have to be converted to SPIR-V or something, which arguably makes it “more non-native”, but I think that’s not as obvious of a distinction to make either. You probably wouldn’t consider C code non-native, even though it’s translated to several different languages before you get native code, and usually you consider compilers that use C as a backend to be native code compilers too, so why would you consider HLSL -> SPIR-V to be any different? There’s reasons why you might make these distinctions, but my point is just that it’s more arbitrary than you might think.

teawrecks,

you still have additional function calls and overhead wrapping lower level libraries

But it all happens at compile time. That’s the difference.

You probably wouldn’t consider C code non-native

This goes back to your point above:

It’s like when people say a language is “a compiled language” when that doesn’t really have much to do with the language

C is just a language, it’s not native. Native means the binary that will execute on hardware is decided at compile time, in other words, it’s not jitted for the platform it’s running on.

usually you consider compilers that use C as a backend to be native code compilers too

I assume you’re not talking about a compiler that generates C code here, right? If it’s outputting C, then no, it’s not native code yet.

so why would you consider HLSL -> SPIR-V to be any different?

Well first off, games don’t ship with their HLSL (unlike OGL where older games DID have to ship with GLSL), they ship with DXBC/DXIL, which is the DX analog to spir-v (or, more accurately, vice versa).

Shader code is jitted on all PC platforms, yes. This is why I said above that shader code has its own quirks, but on platforms where the graphics API effectively needs to be interpreted at runtime, the shaders have to be jitted twice.

sekhat, (edited )

I’d just point out that, for running an executable, wine isn’t JITting anything, at least as far as I’m aware. They’ve implemented the code necessary to read .exe files and link them, and written replacement libraries for typical Windows DLLs, implemented using typical Linux/POSIX functions. But since, in most cases, Linux and Windows run on the same target CPU instruction set, most of the Windows code is runnable mostly as-is, with some minor shim code when jumping between Linux and Windows calling conventions and back again.

Of course, this may be different when wine isn’t running on the same target CPU as the Windows executable. Then there might be JITting involved. But I’ve never tested wine in such a situation, though I’d expect wine to just not work in that case.
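That linking story can be sketched as a toy loader (not wine's actual loader; all names invented): the Windows binary imports functions by name, and wine satisfies those imports with its own Linux-side implementations, without rewriting any machine code:

```python
# Toy model of import resolution: the "exe" asks for symbols by name and
# runs whatever implementation the loader wires in. No code is rewritten.
# The DLL and symbol names below are illustrative, not wine internals.

def linux_create_file(path):
    # A wine-style reimplementation backed by POSIX-ish behavior.
    return f"fd:{path}"

wine_builtin_dlls = {
    "kernel32.CreateFileA": linux_create_file,
}

def resolve_import(symbol):
    # The loader fills the executable's import table from wine's builtins.
    return wine_builtin_dlls[symbol]

# The executable's (already-native x86) code just calls through the table:
create_file = resolve_import("kernel32.CreateFileA")
handle = create_file("C:/game/data.pak")   # -> "fd:C:/game/data.pak"
```

The game's own machine code runs directly on the CPU; only the library calls get rerouted.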

teawrecks,

Yes, the jitting is specific to the graphics APIs. DXVK is doing runtime translation from DX to VK. When possible, they are certainly just making a 1:1 call, but since the APIs aren’t orthogonal, in some cases it will need to store state and “emulate” certain API behavior using multiple VK calls. This is much more the case when translating dx9/11.

Chobbes, (edited )

But it all happens at compile time. That’s the difference.

No, when you have a library like SDL you will have functions that wrap lower level libraries for interacting with the screen and devices. At SDL’s compile time you may have preprocessor macros or whatever which select the implementation of these functions based on the platform, but at run time you still have the extra overhead of these SDL function calls when using the library. The definitions won’t be inlined, and there will be extra overhead to provide a consistent higher level interface, as it won’t exactly match the lower level APIs. It doesn’t matter if it’s compiled, there’s still overhead.

C is just a language, it’s not native. Native means the binary that will execute on hardware is decided at compile time, in other words, it’s not jitted for the platform it’s running on.

Wine doesn’t really involve any jitting, though, it’s just an implementation of the Windows APIs in the Linux userspace… So, arguably it’s as native as anything else. The main place where JIT will occur is for shader compilation in DXVK, where the results will be cached, and there is still JIT going on on the “native windows” side anyway.

If you don’t consider C code compiled to native assembly to be native, then this is all moot, and pretty much nothing is native! I agree that C is just a language so it’s not necessarily compiled down to native assembly, but if you don’t consider it native code when it is… Then what does it mean to be native?

the binary that will execute on hardware is decided at compile time

This is true for interpreted languages. The interpreter is a fixed binary that executes on hardware, and you can even bake in the program being interpreted into an executable! You could argue that control flow is determined dynamically by data stored in memory, so maybe that’s what makes it “non-native”, but this is technically true for any natively compiled binary program too :). There’s a sense in which every program that manipulates data is really just an interpreter, so why consider one to be native and not the other? Even native assembly code isn’t really what’s running on the processor due to things like microcode, and arguably speculative execution is a fancy kind of JIT that happens in hardware which essentially dynamically performs optimizations like loop unrolling… It’s more of a grey area than you might think, and nailing down a precise mathematical definition of “native code” is tricky!
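The interpreter point can be made concrete with a toy stack machine: the interpreter itself is a fixed program, while the "program" it runs is just data that steers it:

```python
# Minimal stack-machine interpreter. The interpreter is the fixed code that
# actually executes; the program it runs is plain data driving its control flow.

def run(program):
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 expressed as data:
prog = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]
result = run(prog)   # -> 20
```

Whether you call `prog` "running natively" is exactly the grey area being argued: the hardware only ever executes the interpreter, yet the observable behavior is the program's.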

I assume you’re not talking about a compiler that generates C code here, right? If it’s outputting C, then no, it’s not native code yet.

But it will be native code :). Pretty much all compilers go through several translation steps between intermediate languages, and it’s not uncommon for compilers to use C as an intermediate language, Vala does this for instance, and even compilers for languages like Haskell have done this in the past. C is a less common target these days, as many compiler front ends will spit out LLVM instead, but it’s still around. Plus, there’s often more restricted C-like languages in the middle. Haskell’s GHC still uses Cmm which is a C-like language for compilation, for example.

Well first off, games don’t ship with their HLSL (unlike OGL where older games DID have to ship with GLSL), they ship with DXBC/DXIL, which is the DX analog to spir-v (or, more accurately, vice versa).

Sure, and arguably it’s a little different to ship a lower level representation, but there will still be a compilation step for this, so you’re arguably not really introducing a new compilation step anyway, just a different one for a different backend. If you consider a binary that you get from a C compiler to be native code, why shouldn’t we consider this to be native code :)? It might not be as optimized as it could have been otherwise, but there’s plenty of native programs where that’s the case anyway, so why consider this to be any different?

Ultimately the native vs. non-native distinction doesn’t really matter, and arguably this distinction doesn’t even really exist — it’s not really easy to settle on a formal definition for this distinction that’s satisfying. The only thing that matters is performance, and people often use these things such as “it’s a compiled language” and “it has to go through fewer translation layers / layers of indirection” as a rule of thumb to guess whether something is less efficient than it could be, but it doesn’t always hold up and it doesn’t always matter. Arguably this is a case where it doesn’t really matter. There’s some overhead with wine and DXVK, but it clearly performs really well (and supposedly better in some cases), and it’s hard to truly compare because the platforms are so different in the first place, so maybe it’s all close enough anyway :).

Also to be clear, it’s not that I don’t see your points, and in a sense you’re correct! But I don’t believe these distinctions are as mathematically precise as you do, which is my main point :). Anyway, I hope you have a happy holidays!

teawrecks,

Ultimately the native vs. non-native distinction doesn’t really matter, and arguably this distinction doesn’t even really exist

Alright. Just letting you know you’re going to have a hard time communicating with people in this industry if you continue rejecting widely accepted terminology. Cheers.

Squid,

Yes this is what I meant, thank you.

Cities skylines is one example.

carl_the_grackle,

Excited to see how this plays out. Looks like there’s basically nothing implemented yet though.

Atemu, (edited )
@Atemu@lemmy.ml avatar

Why is this not being developed inside Mesa? There’s even precedent for it: gallium9.

themoonisacheese,
@themoonisacheese@sh.itjust.works avatar

Because DirectX apps typically do not only call into DirectX but also the win32 API, since DirectX has historically been a windows-only API. Merging this into mesa would only bloat mesa while not really offering support for many applications at all.

This is a great project in general, but it’s quite overshadowed by DXVK which does the same except it translates DX calls to vulkan ones and has excellent success rates in proton and derivatives. I guess this is mildly useful for systems that don’t support vulkan but want to run DX apps in raw wine or simply for people who wish not to use DXVK - competition is good for the ecosystem.

Atemu,
@Atemu@lemmy.ml avatar

Merging this into mesa would only bloat mesa while not really offering support for many applications at all.

But there already is a d3d9 driver inside mesa?

vexikron,

Oooo shit that is right up my ally!

johan,
@johan@feddit.nl avatar

*alley

Atemu,
@Atemu@lemmy.ml avatar

(Unless they have installed it onto their ASUS ROG Ally of course.)

vexikron,

Aha, oops!

Midnight posting on mobile makes one prone to spelling errors.

For what it’s worth, my preference among handheld devices remains the Steam Deck =P

I guess you could say I am an ally of the Ally?

SuperIce,

I don’t think there would be any real benefit to this over DXVK and VKD3D

520, (edited )

The main use case of this is in porting. So if someone wanted to make a native port of their game, this library would make it potentially much easier.

SuperIce,

But why this instead of DXVK or VKD3D? Those can just as easily be integrated.

angrymouse, (edited )

Both use wine iirc; OP is talking about applications written directly for Linux.

Edit: I’m wrong

SuperIce,

Wine uses VKD3D and DXVK, not the other way around. People have even used DXVK on Windows to improve performance in certain situations.

MonkderZweite,

What, DX to Vulkan translation can be faster on Windows than direct DX? How does that work?

Kekin,
@Kekin@lemy.lol avatar

I used DXVK for Dragon’s Dogma on Windows because it ran better overall vs DirectX 9, which the game uses natively.

This was on an AMD Rx 6800 xt

520, (edited )

IIRC the main DXVK dev does this for debugging purposes.

As to why it might be faster, it depends on the DX implementation and what it's being transformed into. If the original DX implementation, especially pre-DX12, is wasteful in terms of instructions, and DXVK knows of a better way to do the exact same thing in Vulkan, it can potentially offset the translation costs.
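As a made-up example of that kind of win (not DXVK internals; names invented): if a DX9-era game redundantly sets the same render state before every draw, a state-tracking translator can drop the duplicates, so fewer calls reach the driver than the game actually made:

```python
# Hypothetical state-tracking translator that elides redundant state sets.
issued = []         # calls that actually reach the "Vulkan driver"
current_state = {}  # the translator's shadow copy of current render state

def translate_set_state(key, value):
    # Only forward the call if it actually changes anything.
    if current_state.get(key) != value:
        current_state[key] = value
        issued.append(("set", key, value))

def translate_draw():
    issued.append(("draw",))

# A wasteful game loop setting the same state before every draw:
for _ in range(3):
    translate_set_state("blend", "alpha")
    translate_draw()

# Only one state set survives translation, plus the three draws:
# issued == [("set", "blend", "alpha"), ("draw",), ("draw",), ("draw",)]
```

Savings like this can offset the fixed cost of the translation itself, which is one way a translated path can come out ahead of a wasteful original.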

Atemu,
@Atemu@lemmy.ml avatar

There is no such thing as “directly” DX. The drivers of the major GPU vendors on Windows must also implement DX on top of their internal abstractions over the hardware.

While Vulkan will theoretically always have more “overhead” compared to using the hardware directly in the best possible manner, the latter isn’t even close to being done anywhere as it’s not feasible.

Therefore, situations where a driver implemented atop VK is faster than a “native” driver are absolutely possible, though not yet common. Other real-world scenarios include Mesa’s Zink atop AMD’s Windows VK driver being much better than AMD’s “native” OpenGL driver, leading to the dev studio of an aircraft sim shipping it in a real game.

kilgore_trout,

leading to a dev studio of an aircraft sim shipping it in a real game.

Is it X-Plane?

unionagainstdhmo,
@unionagainstdhmo@aussie.zone avatar

The APIs aren’t wildly different, so it’s not so much a translation as an implementation of the DirectX API. Some GPU vendors have better Vulkan drivers than DX drivers (Intel), which may give performance improvements.

SimplyTadpole,
@SimplyTadpole@lemmy.dbzer0.com avatar

Besides speed, it’s also really useful for older games with unstable graphics renderers that don’t play nice with modern hardware. When I was still on Windows, I used DXVK on Fallout: New Vegas and Driver: Parallel Lines, and they decreased crashes by a LOT compared to when they ran on native DX9.

In terms of speed, obviously I didn’t notice much of a difference with D:PL since it’s a 2006 game that’s not demanding at all, but I did notice F:NV seemed to also run better and less laggy in general (not only is FNV poorly-optimized, but I also use a lot of graphics mods for it).
