[PATCH vkd3d 09/17] vkd3d-shader/hlsl: Replace register offsets with index paths in load initializations.

Francisco Casas fcasas at codeweavers.com
Thu Jul 14 20:23:51 CDT 2022


The transform_deref_paths_into_offsets pass turns these index paths back
into register offsets.

Signed-off-by: Francisco Casas <fcasas at codeweavers.com>
---

The idea is that we can move the transform_deref_paths_into_offsets()
pass forward as we translate more passes to work with index paths,
until register offsets can be removed entirely, once we implement
the SMxIRs and their translations.

The aim is to have 3 ways of initializing load/store nodes when using index
paths:

* One that initializes the node from another node's deref and an
  optional index to be appended to that deref's path.
* One that initializes the node from a deref and the index of a single
  basic component within it. This one also generates the constant nodes
  for the required path, so it also initializes an instruction block whose
  instructions must be inserted into the instruction list.
* One that initializes the node directly for a whole variable. These functions
  are already present: hlsl_new_var_load() and hlsl_new_simple_store().

The signatures of these functions were placed close to each other in hlsl.h.
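
For illustration, a rough usage sketch of the first two (the caller context is
hypothetical; "ctx", "instrs", "var_load", "idx", "comp" and "loc" are assumed
to come from the surrounding parser code):

const struct hlsl_deref *src = &var_load->src;
struct hlsl_ir_load *load;
struct hlsl_block block;

/* From an existing deref plus an optional index appended to its path. */
if (!(load = hlsl_new_load_index(ctx, src, idx, loc)))
    return NULL;
list_add_tail(instrs, &load->node.entry);

/* From a deref plus the index of a single component; the constant nodes that
 * make up the path (and the load itself) are returned in "block". */
if (!(load = hlsl_new_load_component(ctx, &block, src, comp, loc)))
    return NULL;
list_move_tail(instrs, &block.instrs);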

It is worth noting that the use of index paths makes it possible to remove
the data type argument when initializing stores/loads, because it can now
be deduced from the variable and the hlsl_deref.
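
Concretely, the deduction is just a walk along the path, as in the new
get_type_from_deref() helper below:

struct hlsl_type *type = deref->var->data_type;
unsigned int i;

for (i = 0; i < deref->path_len; ++i)
    type = hlsl_get_type_from_path_index(ctx, type, deref->path[i].node);
return type;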

Applying an index over a matrix dereference retrieves a vector. If the matrix
is row_major, this corresponds to a row; otherwise, it corresponds to a
column. So the code should take matrix majority into account, at least until
the split_matrix_copies pass.
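
For example, following the matrix case in subtype_index_from_component_index()
below: component 6 of a matrix with dimx = 4 and dimy = 3 gives y = 1 and
x = 2, so a row_major matrix yields path index 1 (the second row, a
4-component vector) with component 2 left to resolve, while a column_major
one yields path index 2 (the third column, a 3-component vector) with
component 1 left.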

The first index in a path after loading a struct should be an
hlsl_ir_constant, since the field that's being addressed is always
known at parse-time.
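
For instance, addressing a struct field could be expected to look roughly like
this (hypothetical sketch; "field_index", "struct_deref" and the surrounding
variables are assumptions, not part of this patch):

struct hlsl_ir_constant *c;
struct hlsl_ir_load *load;

/* The field index is known at parse-time, so the path index is a constant. */
if (!(c = hlsl_new_uint_constant(ctx, field_index, loc)))
    return NULL;
list_add_tail(instrs, &c->node.entry);

if (!(load = hlsl_new_load_index(ctx, &struct_deref, &c->node, loc)))
    return NULL;
list_add_tail(instrs, &load->node.entry);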

hlsl_init_simple_deref_from_var() can be used to initialize a deref that can
be passed by reference to the load and store initialization functions.
This value must not be modified after being created and does not
require a call to hlsl_cleanup_deref().
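
For instance, the new version of hlsl_new_var_load() in this patch is built on
top of it:

struct hlsl_deref var_deref;

hlsl_init_simple_deref_from_var(&var_deref, var);
return hlsl_new_load_index(ctx, &var_deref, NULL, &loc);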

---

v2 (including changes in splits of this patch):
* cleanup_deref() had to be made non-static in this commit because this
  is the commit where it is first used outside hlsl.c. It was also renamed
  to hlsl_cleanup_deref() to follow the convention.
* hlsl_new_load_index(), hlsl_new_load_component() and
  hlsl_new_store_index() now receive a "const struct
  vkd3d_shader_location *loc".
* Removed braces from switch cases without declarations.
* hlsl_compute_component_path() made static and no longer used to
  retrieve the component type. The "hlsl_" prefix was removed.
* Using Zeb's strategy for handling recursion in compute_component_path();
  implemented in subtype_index_from_component_index().
* Added hlsl_type_get_component_type() to retrieve the component type.
* Allocating at least 1 byte in compute_component_path(), to ensure that
  NULL is never returned on success.
* Using 'assert(index < type->dimx)' instead of
  'assert(index < type->dimx * type->dimy)' in HLSL_CLASS_VECTOR case in
  path function.
* Restored 'assert(array_index < type->e.array.elements_count)'.
* Added 'assert(!other->offset.node)' in deref_copy() because it is not
  intended to copy derefs that use offset nodes. Naturally, the assertion
  must be removed once we remove the offset field from hlsl_deref.
* Using memset to initialize deref in deref_copy().
* Using path_len instead of 'i' for calling hlsl_src_from_node() in
  hlsl_new_load_index/hlsl_new_store_index 'if (idx)' bodies.
* Removed double newline in hlsl_new_load_component().
* Using list_move_before() instead of iterating instruction list in
  replace_deref_path_with_offset().
* Renamed hlsl_get_direct_var_deref() to
  hlsl_init_simple_deref_from_var(), passing deref by pointer.
* The introduction of hlsl_new_store_index() was moved to another patch.
* The introduction of hlsl_new_store_component() was moved to another
  patch.
* The translation of hlsl_new_resource_load() to register offsets was
  split from this patch.
* hlsl_new_offset_node_from_deref() was implemented in order to create
  new offsets from derefs in hlsl.y, to be used as input for stores and
  resource loads that will still require offsets (at least until the
  following patches).
  This function is also used in replace_deref_path_with_offset(), to make
  the implementation smaller.
* Added assertions to hlsl_new_store_index() and hlsl_new_load_index()
  to ensure that they don't receive derefs with register offset instead
  of index paths.

---

Zeb:

Regarding the possibility of having a hlsl_deref_from_component_index() function, there are two inconveniences:
1. It is necessary to create constant nodes for the path, so an instruction block pointer is required.
2. An additional hlsl_deref pointer is required for its use in hlsl_new_store_component(), since the lhs path needs to be prepended to the path to the component [1].

The signature of the function would end up like this:

static bool hlsl_deref_from_component_index(struct hlsl_ctx *ctx, struct hlsl_block *block,
        struct hlsl_deref *deref, struct hlsl_ir_var *var, unsigned int index,
        struct hlsl_deref *path_prefix, const struct vkd3d_shader_location *loc)

So in the end I think it is not a good idea to have hlsl_deref_from_component_index() directly.

However, subtype_index_from_component_index() worked nicely; it allowed a simple implementation of hlsl_compute_component_path() and the creation of hlsl_type_get_component_type().
Furthermore, given that the latter function makes it easy to obtain the component data type, hlsl_compute_component_path() is no longer required to retrieve the component type and can now be static.
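
For reference, its whole implementation reduces to repeatedly stepping into the subtype:

while (!type_is_single_component(type))
    subtype_index_from_component_index(ctx, &type, &index);

return type;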

I ended up replacing

struct hlsl_deref hlsl_get_direct_var_deref(struct hlsl_ir_var *var);

with:

void hlsl_init_simple_deref_from_var(struct hlsl_deref *deref, struct hlsl_ir_var *var);

as you suggested, but adding the "init" to the name to avoid it being solely a noun phrase.

I decided not to include the cast in hlsl_new_store_component() because now it is only required in one place, and it would be inconsistent with hlsl_new_store_index(), which doesn't include it.

While I could separate hlsl_new_store_index() and hlsl_new_load_component(), hlsl_new_load_component() and hlsl_new_load_index() must both be introduced in the same patch because they both expect dereferences with path indexes instead of offsets (which come from other loads).
Converting from dereferences with offsets to index paths is hard, and I don't think it is worth implementing just to split this patch further (unlike converting the other way around).

[1] There is also the alternative of introducing a hlsl_deref_concat() function that takes two derefs and retrieves a new one.
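
A purely hypothetical sketch of what that could look like (not part of this patch; it assumes the second deref is treated as a path relative to the type the first one resolves to, and that it lives in hlsl.c next to init_deref()):

bool hlsl_deref_concat(struct hlsl_ctx *ctx, struct hlsl_deref *dst,
        const struct hlsl_deref *first, const struct hlsl_deref *second)
{
    unsigned int i;

    assert(!first->offset.node && !second->offset.node);

    /* Allocate a path long enough for both parts, rooted at the first var. */
    if (!init_deref(ctx, dst, first->var, first->path_len + second->path_len))
        return false;

    for (i = 0; i < first->path_len; ++i)
        hlsl_src_from_node(&dst->path[i], first->path[i].node);
    for (i = 0; i < second->path_len; ++i)
        hlsl_src_from_node(&dst->path[first->path_len + i], second->path[i].node);

    return true;
}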

Signed-off-by: Francisco Casas <fcasas at codeweavers.com>
---
 libs/vkd3d-shader/hlsl.c         | 305 ++++++++++++++++++++++++++++++-
 libs/vkd3d-shader/hlsl.h         |  35 +++-
 libs/vkd3d-shader/hlsl.y         | 102 +++++------
 libs/vkd3d-shader/hlsl_codegen.c |  44 +++++
 4 files changed, 417 insertions(+), 69 deletions(-)

diff --git a/libs/vkd3d-shader/hlsl.c b/libs/vkd3d-shader/hlsl.c
index d3ceba35..4b01cca5 100644
--- a/libs/vkd3d-shader/hlsl.c
+++ b/libs/vkd3d-shader/hlsl.c
@@ -330,6 +330,124 @@ unsigned int hlsl_compute_component_offset(struct hlsl_ctx *ctx, struct hlsl_typ
     return 0;
 }
 
+static bool type_is_single_component(const struct hlsl_type *type)
+{
+    return type->type == HLSL_CLASS_SCALAR || type->type == HLSL_CLASS_OBJECT;
+}
+
+/* Given a type and a component index, retrieves the next path index required to reach the component.
+ * *typep will be set to the subtype within the original type that contains the component.
+ * *indexp will be set to the index of the component within *typep.
+ */
+static unsigned int subtype_index_from_component_index(struct hlsl_ctx *ctx,
+        struct hlsl_type **typep, unsigned int *indexp)
+{
+    struct hlsl_type *type = *typep;
+    unsigned int index = *indexp;
+
+    assert(!type_is_single_component(type));
+    assert(index < hlsl_type_component_count(type));
+
+    switch (type->type)
+    {
+        case HLSL_CLASS_VECTOR:
+            assert(index < type->dimx);
+            *typep = hlsl_get_scalar_type(ctx, type->base_type);
+            *indexp = 0;
+            return index;
+
+        case HLSL_CLASS_MATRIX:
+        {
+            unsigned int y = index / type->dimx, x = index % type->dimx;
+            bool row_major = hlsl_type_is_row_major(type);
+
+            assert(index < type->dimx * type->dimy);
+            *typep = hlsl_get_vector_type(ctx, type->base_type, row_major ? type->dimx : type->dimy);
+            *indexp = row_major ? x : y;
+            return row_major ? y : x;
+        }
+
+        case HLSL_CLASS_ARRAY:
+        {
+            unsigned int elem_comp_count = hlsl_type_component_count(type->e.array.type);
+            unsigned int array_index;
+
+            *typep = type->e.array.type;
+            *indexp = index % elem_comp_count;
+            array_index = index / elem_comp_count;
+            assert(array_index < type->e.array.elements_count);
+            return array_index;
+        }
+
+        case HLSL_CLASS_STRUCT:
+        {
+            struct hlsl_struct_field *field;
+            unsigned int field_comp_count, i;
+
+            for (i = 0; i < type->e.record.field_count; ++i)
+            {
+                field = &type->e.record.fields[i];
+                field_comp_count = hlsl_type_component_count(field->type);
+                if (index < field_comp_count)
+                {
+                    *typep = field->type;
+                    *indexp = index;
+                    return i;
+                }
+                index -= field_comp_count;
+            }
+            assert(0);
+            return 0;
+        }
+
+        default:
+            assert(0);
+            return 0;
+    }
+}
+
+struct hlsl_type *hlsl_type_get_component_type(struct hlsl_ctx *ctx, struct hlsl_type *type,
+        unsigned int index)
+{
+    while (!type_is_single_component(type))
+        subtype_index_from_component_index(ctx, &type, &index);
+
+    return type;
+}
+
+/* Returns the path of a given component within a type, given its index.
+ * *path_len will be set to the length of the path.
+ * The returned memory should be freed afterwards.
+ */
+static unsigned int *compute_component_path(struct hlsl_ctx *ctx, struct hlsl_type *type,
+        unsigned int index, unsigned int *path_len)
+{
+    struct hlsl_type *path_type;
+    unsigned int *path, path_index;
+
+    *path_len = 0;
+    path_type = type;
+    path_index = index;
+    while (!type_is_single_component(path_type))
+    {
+        subtype_index_from_component_index(ctx, &path_type, &path_index);
+        ++*path_len;
+    }
+    if (!(path = hlsl_alloc(ctx, *path_len * sizeof(unsigned int) + 1)))
+        return NULL;
+
+    *path_len = 0;
+    path_type = type;
+    path_index = index;
+    while (!type_is_single_component(path_type))
+    {
+        path[*path_len] = subtype_index_from_component_index(ctx, &path_type, &path_index);
+        ++*path_len;
+    }
+
+    return path;
+}
+
 struct hlsl_type *hlsl_get_type_from_path_index(struct hlsl_ctx *ctx, const struct hlsl_type *type,
         struct hlsl_ir_node *node)
 {
@@ -433,6 +551,37 @@ struct hlsl_ir_node *hlsl_new_offset_from_path_index(struct hlsl_ctx *ctx, struc
     return idx_offset;
 }
 
+struct hlsl_ir_node *hlsl_new_offset_node_from_deref(struct hlsl_ctx *ctx, struct hlsl_block *block,
+        const struct hlsl_deref *deref, const struct vkd3d_shader_location *loc)
+{
+    struct hlsl_ir_node *offset = NULL;
+    struct hlsl_type *type;
+    unsigned int i;
+
+    list_init(&block->instrs);
+
+    if (deref->offset.node)
+        return deref->offset.node;
+
+    assert(deref->var);
+
+    type = deref->var->data_type;
+
+    for (i = 0; i < deref->path_len; ++i)
+    {
+        struct hlsl_block idx_block;
+
+        if (!(offset = hlsl_new_offset_from_path_index(ctx, &idx_block, type, offset, deref->path[i].node, loc)))
+            return NULL;
+
+        list_move_tail(&block->instrs, &idx_block.instrs);
+
+        type = hlsl_get_type_from_path_index(ctx, type, deref->path[i].node);
+    }
+
+    return offset;
+}
+
 struct hlsl_type *hlsl_new_array_type(struct hlsl_ctx *ctx, struct hlsl_type *basic_type, unsigned int array_size)
 {
     struct hlsl_type *type;
@@ -523,7 +672,7 @@ struct hlsl_ir_function_decl *hlsl_get_func_decl(struct hlsl_ctx *ctx, const cha
     return NULL;
 }
 
-unsigned int hlsl_type_component_count(struct hlsl_type *type)
+unsigned int hlsl_type_component_count(const struct hlsl_type *type)
 {
     unsigned int count = 0, i;
 
@@ -740,8 +889,50 @@ static bool type_is_single_reg(const struct hlsl_type *type)
     return type->type == HLSL_CLASS_SCALAR || type->type == HLSL_CLASS_VECTOR;
 }
 
-static void cleanup_deref(struct hlsl_deref *deref)
+static bool init_deref(struct hlsl_ctx *ctx, struct hlsl_deref *deref, struct hlsl_ir_var *var,
+        unsigned int path_len)
+{
+    deref->var = var;
+    deref->path_len = path_len;
+    deref->offset.node = NULL;
+
+    if (path_len == 0)
+    {
+        deref->path = NULL;
+        return true;
+    }
+
+    if (!(deref->path = hlsl_alloc(ctx, sizeof(*deref->path) * deref->path_len)))
+    {
+        deref->var = NULL;
+        deref->path_len = 0;
+        return false;
+    }
+
+    return true;
+}
+
+static struct hlsl_type *get_type_from_deref(struct hlsl_ctx *ctx, const struct hlsl_deref *deref)
+{
+    struct hlsl_type *type = deref->var->data_type;
+    unsigned int i;
+
+    for (i = 0; i < deref->path_len; ++i)
+        type = hlsl_get_type_from_path_index(ctx, type, deref->path[i].node);
+    return type;
+}
+
+void hlsl_cleanup_deref(struct hlsl_deref *deref)
 {
+    unsigned int i;
+
+    for (i = 0; i < deref->path_len; ++i)
+        hlsl_src_remove(&deref->path[i]);
+    vkd3d_free(deref->path);
+
+    deref->path = NULL;
+    deref->path_len = 0;
+
     hlsl_src_remove(&deref->offset);
 }
 
@@ -757,13 +948,21 @@ struct hlsl_ir_store *hlsl_new_store(struct hlsl_ctx *ctx, struct hlsl_ir_var *v
         return NULL;
 
     init_node(&store->node, HLSL_IR_STORE, NULL, loc);
-    store->lhs.var = var;
+    init_deref(ctx, &store->lhs, var, 0);
     hlsl_src_from_node(&store->lhs.offset, offset);
     hlsl_src_from_node(&store->rhs, rhs);
     store->writemask = writemask;
     return store;
 }
 
+/* Initializes a simple variable dereference, so that a pointer to it can be passed to load/store
+ * functions. The deref shall not be modified afterwards. */
+void hlsl_init_simple_deref_from_var(struct hlsl_deref *deref, struct hlsl_ir_var *var)
+{
+    memset(deref, 0, sizeof(*deref));
+    deref->var = var;
+}
+
 struct hlsl_ir_store *hlsl_new_simple_store(struct hlsl_ctx *ctx, struct hlsl_ir_var *lhs, struct hlsl_ir_node *rhs)
 {
     return hlsl_new_store(ctx, lhs, NULL, rhs, 0, rhs->loc);
@@ -860,15 +1059,101 @@ struct hlsl_ir_load *hlsl_new_load(struct hlsl_ctx *ctx, struct hlsl_ir_var *var
     if (!(load = hlsl_alloc(ctx, sizeof(*load))))
         return NULL;
     init_node(&load->node, HLSL_IR_LOAD, type, loc);
-    load->src.var = var;
+    init_deref(ctx, &load->src, var, 0);
     hlsl_src_from_node(&load->src.offset, offset);
     return load;
 }
 
+struct hlsl_ir_load *hlsl_new_load_index(struct hlsl_ctx *ctx, const struct hlsl_deref *deref,
+        struct hlsl_ir_node *idx, const struct vkd3d_shader_location *loc)
+{
+    struct hlsl_ir_load *load;
+    struct hlsl_type *type;
+    unsigned int i;
+
+    assert(!deref->offset.node);
+
+    type = get_type_from_deref(ctx, deref);
+    if (idx)
+        type = hlsl_get_type_from_path_index(ctx, type, idx);
+
+    if (!(load = hlsl_alloc(ctx, sizeof(*load))))
+        return NULL;
+    init_node(&load->node, HLSL_IR_LOAD, type, *loc);
+
+    if (!init_deref(ctx, &load->src, deref->var, deref->path_len + !!idx))
+    {
+        vkd3d_free(load);
+        return NULL;
+    }
+    for (i = 0; i < deref->path_len; ++i)
+        hlsl_src_from_node(&load->src.path[i], deref->path[i].node);
+    if (idx)
+        hlsl_src_from_node(&load->src.path[deref->path_len], idx);
+
+    return load;
+}
+
 struct hlsl_ir_load *hlsl_new_var_load(struct hlsl_ctx *ctx, struct hlsl_ir_var *var,
-        const struct vkd3d_shader_location loc)
+        struct vkd3d_shader_location loc)
 {
-    return hlsl_new_load(ctx, var, NULL, var->data_type, loc);
+    struct hlsl_deref var_deref;
+
+    hlsl_init_simple_deref_from_var(&var_deref, var);
+    return hlsl_new_load_index(ctx, &var_deref, NULL, &loc);
+}
+
+struct hlsl_ir_load *hlsl_new_load_component(struct hlsl_ctx *ctx, struct hlsl_block *block,
+        const struct hlsl_deref *deref, unsigned int comp, const struct vkd3d_shader_location *loc)
+{
+    struct hlsl_type *type, *comp_type;
+    unsigned int *path, path_len, i;
+    struct hlsl_ir_constant *c;
+    struct hlsl_ir_load *load;
+
+    list_init(&block->instrs);
+
+    type = get_type_from_deref(ctx, deref);
+    path = compute_component_path(ctx, type, comp, &path_len);
+    if (!path)
+        return NULL;
+
+    if (!(load = hlsl_alloc(ctx, sizeof(*load))))
+    {
+        vkd3d_free(path);
+        return NULL;
+    }
+    comp_type = hlsl_type_get_component_type(ctx, type, comp);
+    init_node(&load->node, HLSL_IR_LOAD, comp_type, *loc);
+
+    if (!init_deref(ctx, &load->src, deref->var, deref->path_len + path_len))
+    {
+        vkd3d_free(path);
+        vkd3d_free(load);
+        return NULL;
+    }
+
+    for (i = 0; i < deref->path_len; ++i)
+        hlsl_src_from_node(&load->src.path[i], deref->path[i].node);
+    for (i = 0; i < path_len; ++i)
+    {
+        if (!(c = hlsl_new_uint_constant(ctx, path[i], loc)))
+        {
+            vkd3d_free(path);
+            hlsl_free_instr_list(&block->instrs);
+            hlsl_cleanup_deref(&load->src);
+            vkd3d_free(load);
+            return NULL;
+        }
+        list_add_tail(&block->instrs, &c->node.entry);
+
+        hlsl_src_from_node(&load->src.path[deref->path_len + i], &c->node);
+    }
+    vkd3d_free(path);
+
+    list_add_tail(&block->instrs, &load->node.entry);
+
+    return load;
 }
 
 struct hlsl_ir_resource_load *hlsl_new_resource_load(struct hlsl_ctx *ctx, struct hlsl_type *data_type,
@@ -1703,7 +1988,7 @@ static void free_ir_jump(struct hlsl_ir_jump *jump)
 
 static void free_ir_load(struct hlsl_ir_load *load)
 {
-    cleanup_deref(&load->src);
+    hlsl_cleanup_deref(&load->src);
     vkd3d_free(load);
 }
 
@@ -1716,8 +2001,8 @@ static void free_ir_loop(struct hlsl_ir_loop *loop)
 static void free_ir_resource_load(struct hlsl_ir_resource_load *load)
 {
     hlsl_src_remove(&load->coords);
-    cleanup_deref(&load->sampler);
-    cleanup_deref(&load->resource);
+    hlsl_cleanup_deref(&load->sampler);
+    hlsl_cleanup_deref(&load->resource);
     hlsl_src_remove(&load->texel_offset);
     vkd3d_free(load);
 }
@@ -1725,7 +2010,7 @@ static void free_ir_resource_load(struct hlsl_ir_resource_load *load)
 static void free_ir_store(struct hlsl_ir_store *store)
 {
     hlsl_src_remove(&store->rhs);
-    cleanup_deref(&store->lhs);
+    hlsl_cleanup_deref(&store->lhs);
     vkd3d_free(store);
 }
 
diff --git a/libs/vkd3d-shader/hlsl.h b/libs/vkd3d-shader/hlsl.h
index 546c87f3..3a6af2a3 100644
--- a/libs/vkd3d-shader/hlsl.h
+++ b/libs/vkd3d-shader/hlsl.h
@@ -374,6 +374,10 @@ struct hlsl_ir_swizzle
 struct hlsl_deref
 {
     struct hlsl_ir_var *var;
+
+    unsigned int path_len;
+    struct hlsl_src *path;
+
     struct hlsl_src offset;
 };
 
@@ -717,6 +721,8 @@ void hlsl_dump_function(struct hlsl_ctx *ctx, const struct hlsl_ir_function_decl
 int hlsl_emit_bytecode(struct hlsl_ctx *ctx, struct hlsl_ir_function_decl *entry_func,
         enum vkd3d_shader_target_type target_type, struct vkd3d_shader_code *out);
 
+void hlsl_cleanup_deref(struct hlsl_deref *deref);
+
 void hlsl_replace_node(struct hlsl_ir_node *old, struct hlsl_ir_node *new);
 
 void hlsl_free_instr(struct hlsl_ir_node *node);
@@ -751,14 +757,29 @@ struct hlsl_ir_if *hlsl_new_if(struct hlsl_ctx *ctx, struct hlsl_ir_node *condit
 struct hlsl_ir_constant *hlsl_new_int_constant(struct hlsl_ctx *ctx, int n,
         const struct vkd3d_shader_location *loc);
 struct hlsl_ir_jump *hlsl_new_jump(struct hlsl_ctx *ctx, enum hlsl_ir_jump_type type, struct vkd3d_shader_location loc);
-struct hlsl_ir_load *hlsl_new_load(struct hlsl_ctx *ctx, struct hlsl_ir_var *var, struct hlsl_ir_node *offset,
-        struct hlsl_type *type, struct vkd3d_shader_location loc);
-struct hlsl_ir_loop *hlsl_new_loop(struct hlsl_ctx *ctx, struct vkd3d_shader_location loc);
+
+void hlsl_init_simple_deref_from_var(struct hlsl_deref *deref, struct hlsl_ir_var *var);
+
+struct hlsl_ir_load *hlsl_new_var_load(struct hlsl_ctx *ctx, struct hlsl_ir_var *var,
+        struct vkd3d_shader_location loc);
+struct hlsl_ir_load *hlsl_new_load_index(struct hlsl_ctx *ctx, const struct hlsl_deref *deref,
+        struct hlsl_ir_node *idx, const struct vkd3d_shader_location *loc);
+struct hlsl_ir_load *hlsl_new_load_component(struct hlsl_ctx *ctx, struct hlsl_block *block,
+        const struct hlsl_deref *deref, unsigned int comp, const struct vkd3d_shader_location *loc);
+
+struct hlsl_ir_store *hlsl_new_simple_store(struct hlsl_ctx *ctx, struct hlsl_ir_var *lhs, struct hlsl_ir_node *rhs);
+
+struct hlsl_ir_node *hlsl_new_offset_node_from_deref(struct hlsl_ctx *ctx, struct hlsl_block *block,
+        const struct hlsl_deref *deref, const struct vkd3d_shader_location *loc);
+
 struct hlsl_ir_resource_load *hlsl_new_resource_load(struct hlsl_ctx *ctx, struct hlsl_type *data_type,
         enum hlsl_resource_load_type type, struct hlsl_ir_var *resource, struct hlsl_ir_node *resource_offset,
         struct hlsl_ir_var *sampler, struct hlsl_ir_node *sampler_offset, struct hlsl_ir_node *coords,
         struct hlsl_ir_node *texel_offset, const struct vkd3d_shader_location *loc);
-struct hlsl_ir_store *hlsl_new_simple_store(struct hlsl_ctx *ctx, struct hlsl_ir_var *lhs, struct hlsl_ir_node *rhs);
+
+struct hlsl_ir_load *hlsl_new_load(struct hlsl_ctx *ctx, struct hlsl_ir_var *var, struct hlsl_ir_node *offset,
+        struct hlsl_type *type, struct vkd3d_shader_location loc);
+struct hlsl_ir_loop *hlsl_new_loop(struct hlsl_ctx *ctx, struct vkd3d_shader_location loc);
 struct hlsl_ir_store *hlsl_new_store(struct hlsl_ctx *ctx, struct hlsl_ir_var *var, struct hlsl_ir_node *offset,
         struct hlsl_ir_node *rhs, unsigned int writemask, struct vkd3d_shader_location loc);
 struct hlsl_type *hlsl_new_struct_type(struct hlsl_ctx *ctx, const char *name,
@@ -775,8 +796,6 @@ struct hlsl_ir_node *hlsl_new_unary_expr(struct hlsl_ctx *ctx, enum hlsl_ir_expr
 struct hlsl_ir_var *hlsl_new_var(struct hlsl_ctx *ctx, const char *name, struct hlsl_type *type,
         const struct vkd3d_shader_location loc, const struct hlsl_semantic *semantic, unsigned int modifiers,
         const struct hlsl_reg_reservation *reg_reservation);
-struct hlsl_ir_load *hlsl_new_var_load(struct hlsl_ctx *ctx, struct hlsl_ir_var *var,
-        const struct vkd3d_shader_location loc);
 
 void hlsl_error(struct hlsl_ctx *ctx, const struct vkd3d_shader_location *loc,
         enum vkd3d_shader_error error, const char *fmt, ...) VKD3D_PRINTF_FUNC(4, 5);
@@ -794,10 +813,12 @@ bool hlsl_scope_add_type(struct hlsl_scope *scope, struct hlsl_type *type);
 
 struct hlsl_type *hlsl_type_clone(struct hlsl_ctx *ctx, struct hlsl_type *old,
         unsigned int default_majority, unsigned int modifiers);
-unsigned int hlsl_type_component_count(struct hlsl_type *type);
+unsigned int hlsl_type_component_count(const struct hlsl_type *type);
 unsigned int hlsl_type_get_array_element_reg_size(const struct hlsl_type *type);
 unsigned int hlsl_compute_component_offset(struct hlsl_ctx *ctx, struct hlsl_type *type,
         unsigned int idx, struct hlsl_type **comp_type);
+struct hlsl_type *hlsl_type_get_component_type(struct hlsl_ctx *ctx, struct hlsl_type *type,
+        unsigned int index);
 bool hlsl_type_is_row_major(const struct hlsl_type *type);
 unsigned int hlsl_type_minor_size(const struct hlsl_type *type);
 unsigned int hlsl_type_major_size(const struct hlsl_type *type);
diff --git a/libs/vkd3d-shader/hlsl.y b/libs/vkd3d-shader/hlsl.y
index a1d39140..3447ddd5 100644
--- a/libs/vkd3d-shader/hlsl.y
+++ b/libs/vkd3d-shader/hlsl.y
@@ -352,7 +352,7 @@ static struct hlsl_ir_node *add_cast(struct hlsl_ctx *ctx, struct list *instrs,
             list_add_tail(instrs, &store->node.entry);
         }
 
-        if (!(load = hlsl_new_load(ctx, var, NULL, dst_type, *loc)))
+        if (!(load = hlsl_new_var_load(ctx, var, *loc)))
             return NULL;
         list_add_tail(instrs, &load->node.entry);
 
@@ -625,31 +625,18 @@ static struct hlsl_ir_jump *add_return(struct hlsl_ctx *ctx, struct list *instrs
 static struct hlsl_ir_load *add_load_index(struct hlsl_ctx *ctx, struct list *instrs, struct hlsl_ir_node *var_node,
         struct hlsl_ir_node *idx, const struct vkd3d_shader_location *loc)
 {
-    struct hlsl_type *elem_type;
-    struct hlsl_ir_node *offset;
+    const struct hlsl_deref *src;
     struct hlsl_ir_load *load;
-    struct hlsl_block block;
-    struct hlsl_ir_var *var;
-
-    elem_type = hlsl_get_type_from_path_index(ctx, var_node->data_type, idx);
 
     if (var_node->type == HLSL_IR_LOAD)
     {
-        const struct hlsl_deref *src = &hlsl_ir_load(var_node)->src;
-
-        var = src->var;
-        if (!(offset = hlsl_new_offset_from_path_index(ctx, &block, var_node->data_type, src->offset.node, idx, loc)))
-            return NULL;
-        list_move_tail(instrs, &block.instrs);
+        src = &hlsl_ir_load(var_node)->src;
     }
     else
     {
         struct vkd3d_string_buffer *name;
         struct hlsl_ir_store *store;
-
-        if (!(offset = hlsl_new_offset_from_path_index(ctx, &block, var_node->data_type, NULL, idx, loc)))
-            return NULL;
-        list_move_tail(instrs, &block.instrs);
+        struct hlsl_ir_var *var;
 
         name = vkd3d_string_buffer_get(&ctx->string_buffers);
         vkd3d_string_buffer_printf(name, "<deref-%p>", var_node);
@@ -661,9 +648,11 @@ static struct hlsl_ir_load *add_load_index(struct hlsl_ctx *ctx, struct list *in
         if (!(store = hlsl_new_simple_store(ctx, var, var_node)))
             return NULL;
         list_add_tail(instrs, &store->node.entry);
+
+        src = &store->lhs;
     }
 
-    if (!(load = hlsl_new_load(ctx, var, offset, elem_type, *loc)))
+    if (!(load = hlsl_new_load_index(ctx, src, idx, loc)))
         return NULL;
     list_add_tail(instrs, &load->node.entry);
 
@@ -673,39 +662,19 @@ static struct hlsl_ir_load *add_load_index(struct hlsl_ctx *ctx, struct list *in
 static struct hlsl_ir_load *add_load_component(struct hlsl_ctx *ctx, struct list *instrs, struct hlsl_ir_node *var_node,
         unsigned int comp, const struct vkd3d_shader_location *loc)
 {
-    struct hlsl_type *comp_type;
-    struct hlsl_ir_node *offset;
-    struct hlsl_ir_constant *c;
+    const struct hlsl_deref *src;
     struct hlsl_ir_load *load;
-    unsigned int comp_offset;
-    struct hlsl_ir_var *var;
-
-    comp_offset = hlsl_compute_component_offset(ctx, var_node->data_type, comp, &comp_type);
-
-    if (!(c = hlsl_new_uint_constant(ctx, comp_offset, loc)))
-        return NULL;
-    list_add_tail(instrs, &c->node.entry);
-
-    offset = &c->node;
+    struct hlsl_block block;
 
     if (var_node->type == HLSL_IR_LOAD)
     {
-        const struct hlsl_deref *src = &hlsl_ir_load(var_node)->src;
-        struct hlsl_ir_node *add;
-
-        var = src->var;
-        if (src->offset.node)
-        {
-            if (!(add = hlsl_new_binary_expr(ctx, HLSL_OP2_ADD, src->offset.node, &c->node)))
-                return NULL;
-            list_add_tail(instrs, &add->entry);
-            offset = add;
-        }
+        src = &hlsl_ir_load(var_node)->src;
     }
     else
     {
         struct vkd3d_string_buffer *name;
         struct hlsl_ir_store *store;
+        struct hlsl_ir_var *var;
 
         name = vkd3d_string_buffer_get(&ctx->string_buffers);
         vkd3d_string_buffer_printf(name, "<deref-%p>", var_node);
@@ -717,11 +686,13 @@ static struct hlsl_ir_load *add_load_component(struct hlsl_ctx *ctx, struct list
         if (!(store = hlsl_new_simple_store(ctx, var, var_node)))
             return NULL;
         list_add_tail(instrs, &store->node.entry);
+
+        src = &store->lhs;
     }
 
-    if (!(load = hlsl_new_load(ctx, var, offset, comp_type, *loc)))
+    if (!(load = hlsl_new_load_component(ctx, &block, src, comp, loc)))
         return NULL;
-    list_add_tail(instrs, &load->node.entry);
+    list_move_tail(instrs, &block.instrs);
 
     return load;
 }
@@ -1275,7 +1246,7 @@ static struct hlsl_ir_node *add_expr(struct hlsl_ctx *ctx, struct list *instrs,
             list_add_tail(instrs, &store->node.entry);
         }
 
-        if (!(load = hlsl_new_load(ctx, var, NULL, type, *loc)))
+        if (!(load = hlsl_new_var_load(ctx, var, *loc)))
             return NULL;
         list_add_tail(instrs, &load->node.entry);
 
@@ -1632,8 +1603,10 @@ static struct hlsl_ir_node *add_assignment(struct hlsl_ctx *ctx, struct list *in
 {
     struct hlsl_type *lhs_type = lhs->data_type;
     struct hlsl_ir_store *store;
+    struct hlsl_ir_node *offset;
     struct hlsl_ir_expr *copy;
     unsigned int writemask = 0;
+    struct hlsl_block block;
 
     if (assign_op == ASSIGN_OP_SUB)
     {
@@ -1702,10 +1675,13 @@ static struct hlsl_ir_node *add_assignment(struct hlsl_ctx *ctx, struct list *in
         }
     }
 
+    offset = hlsl_new_offset_node_from_deref(ctx, &block, &hlsl_ir_load(lhs)->src, &lhs->loc);
+    list_move_tail(instrs, &block.instrs);
+
     init_node(&store->node, HLSL_IR_STORE, NULL, lhs->loc);
     store->writemask = writemask;
     store->lhs.var = hlsl_ir_load(lhs)->src.var;
-    hlsl_src_from_node(&store->lhs.offset, hlsl_ir_load(lhs)->src.offset.node);
+    hlsl_src_from_node(&store->lhs.offset, offset);
     hlsl_src_from_node(&store->rhs, rhs);
     list_add_tail(instrs, &store->node.entry);
 
@@ -2236,7 +2212,7 @@ static bool intrinsic_mul(struct hlsl_ctx *ctx,
             list_add_tail(params->instrs, &store->node.entry);
         }
 
-    if (!(load = hlsl_new_load(ctx, var, NULL, matrix_type, *loc)))
+    if (!(load = hlsl_new_var_load(ctx, var, *loc)))
         return false;
     list_add_tail(params->instrs, &load->node.entry);
 
@@ -2445,8 +2421,10 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
             && object_type->sampler_dim != HLSL_SAMPLER_DIM_CUBEARRAY)
     {
         const unsigned int sampler_dim = hlsl_sampler_dim_count(object_type->sampler_dim);
+        struct hlsl_ir_node *object_load_offset;
         struct hlsl_ir_resource_load *load;
         struct hlsl_ir_node *coords;
+        struct hlsl_block block;
 
         if (object_type->sampler_dim == HLSL_SAMPLER_DIM_2DMS
                 || object_type->sampler_dim == HLSL_SAMPLER_DIM_2DMSARRAY)
@@ -2471,8 +2449,11 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
                 hlsl_get_vector_type(ctx, HLSL_TYPE_INT, sampler_dim + 1), loc)))
             return false;
 
+        object_load_offset = hlsl_new_offset_node_from_deref(ctx, &block, &object_load->src, loc);
+        list_move_tail(instrs, &block.instrs);
+
         if (!(load = hlsl_new_resource_load(ctx, object_type->e.resource_format, HLSL_RESOURCE_LOAD,
-                object_load->src.var, object_load->src.offset.node, NULL, NULL, coords, NULL, loc)))
+                object_load->src.var, object_load_offset, NULL, NULL, coords, NULL, loc)))
             return false;
         list_add_tail(instrs, &load->node.entry);
         return true;
@@ -2482,11 +2463,13 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
             && object_type->sampler_dim != HLSL_SAMPLER_DIM_2DMSARRAY)
     {
         const unsigned int sampler_dim = hlsl_sampler_dim_count(object_type->sampler_dim);
+        struct hlsl_ir_node *object_load_offset, *sampler_load_offset;
         const struct hlsl_type *sampler_type;
         struct hlsl_ir_resource_load *load;
         struct hlsl_ir_node *offset = NULL;
         struct hlsl_ir_load *sampler_load;
         struct hlsl_ir_node *coords;
+        struct hlsl_block block;
 
         if (params->args_count != 2 && params->args_count != 3)
         {
@@ -2522,11 +2505,18 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
                 return false;
         }
 
+        object_load_offset = hlsl_new_offset_node_from_deref(ctx, &block, &object_load->src, loc);
+        list_move_tail(instrs, &block.instrs);
+
+        sampler_load_offset = hlsl_new_offset_node_from_deref(ctx, &block, &sampler_load->src, loc);
+        list_move_tail(instrs, &block.instrs);
+
         if (!(load = hlsl_new_resource_load(ctx, object_type->e.resource_format,
-                HLSL_RESOURCE_SAMPLE, object_load->src.var, object_load->src.offset.node,
-                sampler_load->src.var, sampler_load->src.offset.node, coords, offset, loc)))
+                HLSL_RESOURCE_SAMPLE, object_load->src.var, object_load_offset,
+                sampler_load->src.var, sampler_load_offset, coords, offset, loc)))
             return false;
         list_add_tail(instrs, &load->node.entry);
+
         return true;
     }
     else if ((!strcmp(name, "Gather") || !strcmp(name, "GatherRed") || !strcmp(name, "GatherBlue")
@@ -2537,6 +2527,7 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
             || object_type->sampler_dim == HLSL_SAMPLER_DIM_CUBEARRAY))
     {
         const unsigned int sampler_dim = hlsl_sampler_dim_count(object_type->sampler_dim);
+        struct hlsl_ir_node *object_load_offset, *sampler_load_offset;
         enum hlsl_resource_load_type load_type;
         const struct hlsl_type *sampler_type;
         struct hlsl_ir_resource_load *load;
@@ -2545,6 +2536,7 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
         struct hlsl_type *result_type;
         struct hlsl_ir_node *coords;
         unsigned int read_channel;
+        struct hlsl_block block;
 
         if (!strcmp(name, "GatherGreen"))
         {
@@ -2626,9 +2618,15 @@ static bool add_method_call(struct hlsl_ctx *ctx, struct list *instrs, struct hl
                 hlsl_get_vector_type(ctx, HLSL_TYPE_FLOAT, sampler_dim), loc)))
             return false;
 
+        object_load_offset = hlsl_new_offset_node_from_deref(ctx, &block, &object_load->src, loc);
+        list_move_tail(instrs, &block.instrs);
+
+        sampler_load_offset = hlsl_new_offset_node_from_deref(ctx, &block, &sampler_load->src, loc);
+        list_move_tail(instrs, &block.instrs);
+
         if (!(load = hlsl_new_resource_load(ctx, result_type,
-                load_type, object_load->src.var, object_load->src.offset.node,
-                sampler_load->src.var, sampler_load->src.offset.node, coords, offset, loc)))
+                load_type, object_load->src.var, object_load_offset,
+                sampler_load->src.var, sampler_load_offset, coords, offset, loc)))
             return false;
         list_add_tail(instrs, &load->node.entry);
         return true;
diff --git a/libs/vkd3d-shader/hlsl_codegen.c b/libs/vkd3d-shader/hlsl_codegen.c
index 373439af..0b577f54 100644
--- a/libs/vkd3d-shader/hlsl_codegen.c
+++ b/libs/vkd3d-shader/hlsl_codegen.c
@@ -21,6 +21,48 @@
 #include "hlsl.h"
 #include <stdio.h>
 
+/* TODO: remove when no longer needed, only used for transform_deref_paths_into_offsets() */
+static void replace_deref_path_with_offset(struct hlsl_ctx *ctx, struct hlsl_deref *deref,
+        struct hlsl_ir_node *instr)
+{
+    struct hlsl_ir_node *offset;
+    struct hlsl_block block;
+
+    if (!deref->var)
+        return;
+
+    if (!(offset = hlsl_new_offset_node_from_deref(ctx, &block, deref, &instr->loc)))
+        return;
+    list_move_before(&instr->entry, &block.instrs);
+
+    hlsl_cleanup_deref(deref);
+    hlsl_src_from_node(&deref->offset, offset);
+}
+
+/* TODO: remove when no longer needed. */
+static bool transform_deref_paths_into_offsets(struct hlsl_ctx *ctx, struct hlsl_ir_node *instr, void *context)
+{
+    switch (instr->type)
+    {
+        case HLSL_IR_LOAD:
+            replace_deref_path_with_offset(ctx, &hlsl_ir_load(instr)->src, instr);
+            return true;
+
+        case HLSL_IR_STORE:
+            replace_deref_path_with_offset(ctx, &hlsl_ir_store(instr)->lhs, instr);
+            return true;
+
+        case HLSL_IR_RESOURCE_LOAD:
+            replace_deref_path_with_offset(ctx, &hlsl_ir_resource_load(instr)->resource, instr);
+            replace_deref_path_with_offset(ctx, &hlsl_ir_resource_load(instr)->sampler, instr);
+            return true;
+
+        default:
+            return false;
+    }
+    return false;
+}
+
 /* Split uniforms into two variables representing the constant and temp
  * registers, and copy the former to the latter, so that writes to uniforms
  * work. */
@@ -1890,6 +1932,8 @@ int hlsl_emit_bytecode(struct hlsl_ctx *ctx, struct hlsl_ir_function_decl *entry
 
     list_move_head(&body->instrs, &ctx->static_initializers);
 
+    transform_ir(ctx, transform_deref_paths_into_offsets, body, NULL); /* TODO: move forward, remove when no longer needed */
+
     LIST_FOR_EACH_ENTRY(var, &ctx->globals->vars, struct hlsl_ir_var, scope_entry)
     {
         if (var->modifiers & HLSL_STORAGE_UNIFORM)
-- 
2.34.1