Gerald Pfeifer <gerald(a)pfeifer.com> writes:
> So, I admit I don't really know this code, but looking at it (triggered
> by a warning issued by GCC development versions), I noticed that this
> variable passed by reference is not initialized here.
It's initialized when we return a type, and it doesn't need to be
initialized on NULL return. The code is correct, but you could probably
set the variable to NULL in the caller to silence the warning.
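Something along these lines in the caller would do (the names here are
made up, just to illustrate the pattern):

type_t *type = NULL;  /* hypothetical; silences -Wmaybe-uninitialized */

if (!get_type(name, &type))  /* type is only written when a type is returned */
    return;
use_type(type);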
--
Alexandre Julliard
julliard(a)winehq.org
Looking at
RPC_STATUS WINAPI RpcBindingVectorFree( RPC_BINDING_VECTOR** BindingVector )
{
    RPC_STATUS status;
    ULONG c;

    TRACE("(%p)\n", BindingVector);

    for (c=0; c<(*BindingVector)->Count; c++) {
        status = RpcBindingFree(&(*BindingVector)->BindingH[c]);
    }
    HeapFree(GetProcessHeap(), 0, *BindingVector);
    *BindingVector = NULL;

    return RPC_S_OK;
}
we currently always ignore the outcome of RpcBindingFree and return
RPC_S_OK.
However, there is one case where RpcBindingFree returns something
different: it returns RPC_S_INVALID_BINDING when *Binding is NULL.
What is the proper way of handling this? Just keeping the code as
is and removing the unused status variable? Breaking out of the loop
once RpcBindingFree returns something other than RPC_S_OK? Continuing
and returning the first / the last status different from RPC_S_OK?
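For illustration, the last variant (continue, but remember and return
the first failure) could look something like this sketch:

RPC_STATUS WINAPI RpcBindingVectorFree( RPC_BINDING_VECTOR** BindingVector )
{
    RPC_STATUS status = RPC_S_OK;
    ULONG c;

    TRACE("(%p)\n", BindingVector);

    for (c=0; c<(*BindingVector)->Count; c++) {
        RPC_STATUS ret = RpcBindingFree(&(*BindingVector)->BindingH[c]);
        if (status == RPC_S_OK) status = ret;  /* keep the first failure */
    }
    HeapFree(GetProcessHeap(), 0, *BindingVector);
    *BindingVector = NULL;

    return status;
}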
Gerald
Here are some things I've learned about PCI-passthrough recently, which
would be one way (probably the best) to add "real hardware" to the
TestBot.
I don't want to give anyone false hopes though: this just went from
"this is a mysterious thing I need to learn about" to "I think I know
how to do it but have not tried it yet".
So graphics card PCI-passthrough is now relatively well documented on
the Internet, and there are enough reported use cases to suggest it may
even be reasonably usable.
* There are two machines intended to run real GPU tests for Wine:
cw1-hd6800 and cw2-gtx560. For now they are only used to run WineTest
daily on Windows 8.1, Windows 10 1507, 1709, 1809 and Linux. That's
quite a bunch but it would be much better if they were integrated with
the TestBot as that would allow developers to submit their own tests.
So I had a look at what it would take to convert them to VM hosts
using QEmu + PCI-passthrough.
* First one needs a processor with hardware virtualisation support. For
PCI-passthrough on Intel that means VT-d. Both machines have an Intel
Core 2600 which supports VT-d. Good.
* Second the motherboard too needs to support VT-d. Both machines have
an ASRock P67 Extreme4 motherboard. Unfortunately UEFI says
"unsupported" next to the "VT-d" setting for the motherboard :-( It
looks like there was initially some confusion as to whether the P67
chipset supported VT-d. From what I gathered only the Q67 does, but the
confusion led some manufacturers, ASRock among them, to initially claim
support and later retract it.
* Then one needs to add the intel_iommu=on option to the kernel command
line (or amd_iommu=on on AMD). This should make all the PCI devices
appear in /sys/kernel/iommu_groups. But that folder remains empty, which
confirms that full VT-d support is missing.
* Another important aspect is to have a graphics card which is
hot-restartable. In some cases, when a VM crashes the graphics card, the
only way to reset it is to reboot the host. The TestBot is likely to
crash the graphics card, particularly if we do a hard power-off on the
VMs like we currently do, and it would really be annoying to have to
reboot the host every time the graphics card goes belly up.
I don't know if the AMD HD6800 and Nvidia GTX560 are suitable but it's
quite possible they are not. All I know for now is that we should
avoid AMD's R9 line of graphics cards. I still need to find a couple
of suitable, reasonably low-power graphics cards: one AMD and one
Nvidia.
* Then one needs to prevent the host from using the graphics card.
Usually that's done by having the host use the processor's IGP and
dedicating the discrete GPU to the VMs. Unfortunately the 2600's IGP
cannot be active when there's a discrete card so that route is denied
to us. Fortunately there's quite a bit of documentation on how to shut
down not just X but also the Linux virtual consoles to free the GPU
and hand it over to the VMs after boot.
Doing so means losing KVM access to the host which is a bit annoying
in case something goes wrong. So ideally we'd make sure this does not
happen in grub's "safe mode" boot option.
* Although I have not done any tests yet I'm reasonably certain that
PCI-passthrough rules out live snapshots: QEmu would have no way to
restore the graphics card's internal state.
- For Windows VMs that's not an issue: if we provide a powered-off
snapshot the TestBot already knows how to power on the VM and wait
for it to boot (as long as the boot is shorter than the connection
timeout, which usually works out).
- For Linux VMs that's more of an issue: the TestBot will power on
the VM as usual. The problem is when it updates Wine: after
recompiling everything it deletes the old snapshot and creates a new
one from the current state of the VM, which means a live snapshot.
So the TestBot will need to be modified so it knows when and how to
power off the VM and take a powered off snapshot.
* Since the VM has full control of the graphics card QEmu has no access
to the content of the screen. That's not an issue for the normal
TestBot operation, just for the initial VM setup. Fortunately the
graphics card is connected to a KVM so the screen can be accessed
through that means. It does mean assigning the mouse and keyboard to
the VM too. Should that prove impractical there are a bunch of other
options: VNC, LookingGlass, Synergy, etc. But the less that needs to be
installed in the VMs the better.
* Also the TestBot uses QEmu to take the screenshots, but QEmu does not
have access to the content of the screen. The fix is to use a tool that
takes the screenshots from within the VM and use TestAgent to retrieve
them. On Linux there are standard tools we can use. On Windows there's
code floating around we can use; a minimal sketch of the GDI approach
is below.
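For reference, a minimal sketch of the Windows-side capture using plain
GDI (only the capture itself; saving the bitmap and wiring it up to
TestAgent would be extra):

#include <windows.h>

/* Capture the whole screen into a device-dependent bitmap.
 * The caller owns the returned bitmap and must DeleteObject() it. */
static HBITMAP capture_screen(void)
{
    int width = GetSystemMetrics(SM_CXSCREEN);
    int height = GetSystemMetrics(SM_CYSCREEN);
    HDC screen = GetDC(NULL);
    HDC mem = CreateCompatibleDC(screen);
    HBITMAP bmp = CreateCompatibleBitmap(screen, width, height);
    HGDIOBJ old = SelectObject(mem, bmp);

    BitBlt(mem, 0, 0, width, height, screen, 0, 0, SRCCOPY);

    SelectObject(mem, old);
    DeleteDC(mem);
    ReleaseDC(NULL, screen);
    return bmp;
}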
So the next steps would be:
* Maybe test on my box using the built-in IGP.
But that likely won't be very conclusive beyond confirming the
snapshot issues, screen access, etc.
* Find a suitable AMD or Nvidia graphics card and test that on my box.
That would allow me to fully test integration with the TestBot, check
for stability issues, etc.
* Then see what can be done with the existing cw1 and cw2 boxes.
--
Francois Gouget <fgouget(a)codeweavers.com>
Hi all,
Last night Martin pushed an update to llvm-mingw bumping the LLVM version
to a commit that includes a number of fixes for Wine. See [1] for
details. Thank you, Martin! Meanwhile, Wine got the required fixes, so it
all should mostly work together. If you want to try it, just clone the
git repository [2] and run:
DEFAULT_MSVCRT=msvcrt-os ./build-all.sh /path/to/install
If the installation is on PATH, current Wine should be able to use it
without any additional tweaks. You should be able to configure it just
like you would a GCC-based mingw build.
The DEFAULT_MSVCRT=msvcrt-os part is needed because Wine can't deal with
mingw-w64 defaulting to a CRT version other than msvcrt.dll. This is not
a problem specific to LLVM; we will hit the same problem on GCC if
mingw-w64 is configured to use another CRT (usually ucrt, though things
like msvcrt100 are also possible). It is not yet a popular setup, but it
will probably become more popular over time, so it would be great to
have it supported. The ultimate solution for Wine is to always use
-nodefaultlibs for all its binaries. It's already the case for all Wine
builtin DLLs; we just need to do the same for EXEs. I have some
unfinished patches for that, but it's not something appropriate for code
freeze. I'm experimenting with a smaller fix, because it would be great
to have something sooner, but using DEFAULT_MSVCRT=msvcrt-os is required
for now.
One of the nice LLVM features is support for PDB files. If you want to make
a build with PDB files, configure Wine like this:
configure CROSSCFLAGS="-g -gcodeview -O2" CROSSLDFLAGS="-Wl,-pdb="
#append your usual args
and then run make like:
make CROSSLDFLAGS="-Wl,-pdb="
The additional make argument is needed because Wine does not yet
propagate CROSSLDFLAGS from configure. Patch [3] should fix it.
Cheers,
Jacek
[1]
https://github.com/mstorsjo/llvm-mingw/commit/056c1f5cd22b1c5ca76af38f2d1f9…
[2] https://github.com/mstorsjo/llvm-mingw
[3] https://source.winehq.org/patches/data/176054
So, I have this idea of replacing Winetricks with something more beautiful,
responsive and functional. I have been developing Wineglass independently
as an elementaryOS app (https://github.com/aggalex/Wineglass) and I want to
expand it and integrate it with Wine in a much better way. If possible,
could you inform me about Wine's GSoC program? Do you believe such a
project is suitable for Wine and GSoC?
This patchset implements support for DXIL shaders (SM 6.0+) via dxil-spirv
(github.com/HansKristian-Work/dxil-spirv).
There are three main parts of this patchset.
First, support for SM 5.1 register spaces is introduced. These patches
have been submitted before, but they were only reviewed after a rebase,
and that rebase broke the patches due to some uninitialized member
variables introduced in the UAV clear patch set which came in between.
I've fixed those, and cleaned up the patches a little bit. I need SM 5.1
register spaces implemented because the DXIL implementation needs them
as well.
I also fixed some additional cases for SM 5.1 which were not implemented
in the last patch set, like SM 5.1 root constants and root descriptors.
To aid debugging, I also added SPIR-V dumping to the
VKD3D_SHADER_DUMP_PATH, which was very useful for studying differences
between the DXBC and DXIL outputs.
Second, we have the dxil-spirv integration. There are various small
refactors needed to enable this, mostly just moving a few helper
functions in vkd3d-shader around to the private header so they can be
accessed by dxil.c. Vulkan 1.1 is enabled if available, because
subgroup operations require it. SM 6.0 support is only activated if
subgroup operations are sufficiently supported and DXIL is enabled.
There are other features required for SM 6.0, such as 16-bit arithmetic
and storage, but those are left for later. We will need to revisit the
binding model to enable that properly.
The actual DXIL implementation lives in dxil.c. dxbc.c will detect that
a shader blob is DXIL by looking for TAG_DXIL and dispatch the work to
dxil.c if DXIL support is enabled. DXIL blobs are basically identical to
DXBC blobs, except TAG_DXIL is used instead of TAG_SHDR, and ISG1 is
used instead of ISGN, etc.
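For illustration, the detection boils down to scanning the container's
chunk tags. A simplified sketch, assuming the usual DXBC layout (4-byte
magic, 16-byte checksum, 4-byte version, 4-byte total size, 4-byte chunk
count, then one 4-byte offset per chunk); the helper and constant names
in the actual patches may differ:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define TAG_DXIL 0x4c495844u  /* "DXIL" as a little-endian FourCC */

static bool shader_is_dxil(const void *code, size_t size)
{
    const uint8_t *data = code;
    uint32_t chunk_count, i;

    if (size < 32)
        return false;
    memcpy(&chunk_count, data + 28, sizeof(chunk_count));

    for (i = 0; i < chunk_count; ++i)
    {
        uint32_t offset, tag;

        if (32 + 4 * (size_t)i + 4 > size)
            return false;
        memcpy(&offset, data + 32 + 4 * i, sizeof(offset));
        if (offset + 4 > size)
            return false;
        memcpy(&tag, data + offset, sizeof(tag));
        if (tag == TAG_DXIL)
            return true;
    }
    return false;
}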
To make integration as smooth as possible, dxil-spirv relies on
callbacks rather than feeding structures to the compiler to resolve
resource bindings and similar. This way, we won't have to translate the
vkd3d_shader_* structures.
Finally, most of the commits here are to add DXIL testing paths for many
of the tests in tests/d3d12.c. The dxil-spirv repo has a lot of tests
already to cover codegen, so these tests are mostly to verify that the
integration works. Adding these tests *did* find some bugs in the
dxil-spirv implementation as well, but it mostly went without any major
problems.
When emitting DXIL, the blob sizes are not word-aligned, so they are
embedded as byte arrays instead. They are also much larger than
equivalent DXBC blobs, due to LLVM IR bloat. I ran -Qstrip_debug
-Qstrip_reflect in DXC when compiling these shaders.
The main strategy I used to add DXIL tests was to turn the existing
tests into a function à la static void test_something(bool use_dxil) and
just select the right shader code as required.
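In sketch form (the shader arrays, names and wrappers below are just
placeholders):

static void test_something(bool use_dxil)
{
    static const DWORD ps_code_dxbc[] = { /* compiled DXBC words */ 0 };
    static const BYTE ps_code_dxil[] = { /* signed DXIL bytes */ 0 };
    D3D12_SHADER_BYTECODE ps;

    if (use_dxil)
    {
        ps.pShaderBytecode = ps_code_dxil;
        ps.BytecodeLength = sizeof(ps_code_dxil);
    }
    else
    {
        ps.pShaderBytecode = ps_code_dxbc;
        ps.BytecodeLength = sizeof(ps_code_dxbc);
    }

    /* ... the body of the original test, unchanged ... */
}

static void test_something_dxbc(void) { test_something(false); }
static void test_something_dxil(void) { test_something(true); }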
I had to add some utility functions as well because the D3D12 validation
layers refuse to mix and match SM 5.1 and below with SM 6.0 and above.
These utilities just use different default shader blobs.
Another note is that to make the DXIL shaders pass validation on
native D3D12, the DXIL has to be signed by Microsoft's validator.
Hans-Kristian Arntzen (41):
vkd3d: Deal correctly with SM 5.1 register spaces.
vkd3d: Add test case for SM 5.1 register spaces.
vkd3d: Add test case for root constants in SM 5.1.
vkd3d-shader: Add path for debug dumping SPIR-V as well.
vkd3d: Add dxil-spirv to autoconf
vkd3d: Attempt to parse ISG1 as well when parsing input signatures.
vkd3d-shader: Add entry point to query if DXIL is supported.
vkd3d: Attempt to create a Vulkan 1.1 instance and device.
vkd3d: Query subgroup properties and expose SM 6.0 if present.
vkd3d: Move vkd3d_find_shader into private header.
vkd3d: Add helper function to query if a blob is DXIL.
vkd3d-shader: Expose debug shader dumping in private header.
vkd3d-shader: Add integration for DXIL shaders.
vkd3d: Add test helper function to determine if DXIL is supported.
vkd3d: Add helper test function to set up a default pipeline with
DXIL.
vkd3d: Add DXIL test for geometry shader.
vkd3d: Add DXIL test for layered rendering.
vkd3d: Add DXIL test for ps_layer.
vkd3d: Add DXIL test for quad_tessellation.
vkd3d: Add DXIL test for tess control point phase.
vkd3d: Add DXIL test for tess fork phase.
vkd3d: Add DXIL test for line tessellation.
vkd3d: Add DXIL test for stream output.
vkd3d: Add DXIL test for bufinfo.
vkd3d: Add DXIL test for register spaces.
vkd3d: Add DXIL test for constant buffers (root const/desc).
vkd3d: Add DXIL test for dual source blending.
vkd3d: Add DXIL test for face culling.
vkd3d: Add DXIL test for render_target_a8.
vkd3d-shader: Hook up RT output swizzle path in DXIL.
vkd3d: Add DXIL test for sample mask.
vkd3d: Add DXIL test for coverage.
vkd3d: Add create_pipeline_state_dxil test utility.
vkd3d: Add DXIL test for shader_sample_position.
vkd3d: Add DXIL test for rasterizer sample count.
vkd3d-shader: Hook up RASTERIZER_SAMPLE_COUNT parameter.
vkd3d: Add DXIL test for clip distance.
vkd3d: Add DXIL test for combined ClipCull.
vkd3d: Add DXIL test for eval attribute.
vkd3d: Add DXIL test for instance_id.
vkd3d: Add DXIL test for vertex ID.
Makefile.am | 8 +-
configure.ac | 11 +
include/vkd3d_shader.h | 18 +
libs/vkd3d-shader/dxbc.c | 58 +-
libs/vkd3d-shader/dxil.c | 470 ++
libs/vkd3d-shader/spirv.c | 215 +-
libs/vkd3d-shader/vkd3d_shader.map | 1 +
libs/vkd3d-shader/vkd3d_shader_main.c | 99 +-
libs/vkd3d-shader/vkd3d_shader_private.h | 38 +
libs/vkd3d/command.c | 32 +-
libs/vkd3d/device.c | 69 +-
libs/vkd3d/state.c | 65 +-
libs/vkd3d/utils.c | 3 +
libs/vkd3d/vkd3d_private.h | 5 +
tests/d3d12.c | 5983 +++++++++++++++++++---
tests/d3d12_test_utils.h | 193 +-
16 files changed, 6544 insertions(+), 724 deletions(-)
create mode 100644 libs/vkd3d-shader/dxil.c
--
2.25.0
Hello wine-devel,
Looking at https://bugs.winehq.org/show_bug.cgi?id=35009, the problem is that
Windows uses completely different sorting weight tables from the official
Unicode ones.
However, MS disclosed them under their "Open Specifications" program:
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-ucoderef/
(The tables are available as a download, under the same license, according
to MS support.)
If we could use them, we should be able to produce the exact same sorting
Windows has. And to properly solve those Unicode bugs, that's what we need.
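For context, this is the level where those tables end up mattering: they
determine the bytes of the sort key. A small sketch (the string is just a
placeholder, not the one from the bug report):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    static const WCHAR str[] = {'a','-','b',0};
    BYTE key[256];
    int i, len;

    /* with LCMAP_SORTKEY the destination is a byte array and the size is
     * counted in bytes */
    len = LCMapStringW(LOCALE_USER_DEFAULT, LCMAP_SORTKEY,
                       str, -1, (WCHAR *)key, sizeof(key));
    for (i = 0; i < len; i++)
        printf("%02x ", key[i]);
    printf("\n");
    return 0;
}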
However, two questions arise:
1) If we could, would we even want to use those tables?
2) Is the license of that content compatible with Wine?
Your thoughts on this? Question 1) is probably the easiest. The copyright
notice can be found inside the PDF FWIW, but I don't really understand
lawyer-speak.
Regards,
Fabian Maurer