Success: Changes

Summary

  1. AMDGPU/GlobalISel: Lower G_FREM
  2. Free the memory allocated by mlirOperationStateAddXXX methods in mlirOperationCreate.
  3. [DebugInfo] Fix initialization of DwarfCompileUnit::LabelBegin.
  4. [lldb][NFC] Remove dead code in BreakpointResolverAddress
  5. [ScalarizeMaskedMemIntrin] Scalarize constant mask expandload as shuffle(build_vector,pass_through)
Commit 0d58d9e8fb937b422baaf96dc7c60e7c3a128302 by petar.avramovic
AMDGPU/GlobalISel: Lower G_FREM

Add custom lowering for G_FREM.

Differential Revision: https://reviews.llvm.org/D84324
The file was modified llvm/include/llvm/CodeGen/GlobalISel/MachineIRBuilder.h
The file was modified llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.h
The file was modified llvm/lib/Target/AMDGPU/AMDGPULegalizerInfo.cpp
The file was added llvm/test/CodeGen/AMDGPU/GlobalISel/frem.ll
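A note on the expansion: when no native instruction exists, G_FREM is typically lowered with the trunc-based identity frem(x, y) = x - trunc(x / y) * y. Below is a minimal C++ sketch of that arithmetic, an illustration only; the actual AMDGPU expansion is emitted through MachineIRBuilder as generic MIR and may differ in details such as precision and denormal handling.

    // Sketch of the trunc-based identity commonly used to lower frem.
    // Assumption: the AMDGPU G_FREM expansion follows this shape; the real
    // code builds the equivalent generic MIR (fdiv, intrinsic_trunc, fmul,
    // fsub) rather than calling libm.
    #include <cmath>
    #include <cstdio>

    float fremLowered(float X, float Y) {
      return X - std::trunc(X / Y) * Y; // frem(x, y) = x - trunc(x / y) * y
    }

    int main() {
      // Agrees with fmodf on typical finite inputs.
      std::printf("%f %f\n", fremLowered(5.5f, 2.0f), std::fmod(5.5f, 2.0f));
      return 0;
    }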
Commit 69eb7e36aa3c71997811054bb31d4546b08bfff0 by zinenko
Free the memory allocated by mlirOperationStateAddXXX methods in mlirOperationCreate.

Previously, this memory leaked on the heap. Since an MlirOperationState is not intended to be used again after mlirOperationCreate, the patch simply frees the memory in mlirOperationCreate instead of adding any new API.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D85629
The file was modified mlir/lib/CAPI/IR/IR.cpp
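The practical consequence for C API users is an ownership contract: the arrays heap-allocated by the mlirOperationStateAdd* helpers are released inside mlirOperationCreate, so a state must not be reused afterwards. A sketch follows, assuming present-day C API signatures (which may have differed slightly at the time of this commit); buildOp and "foo.op" are illustrative, not part of the change.

    // Sketch of the ownership contract after this change. The mlir* calls are
    // the real MLIR C API; the wrapper function and operation name are
    // hypothetical.
    #include "mlir-c/IR.h"

    MlirOperation buildOp(MlirLocation loc, MlirType resultType) {
      MlirOperationState state =
          mlirOperationStateGet(mlirStringRefCreateFromCString("foo.op"), loc);
      // Copies resultType into a heap-allocated array owned by `state`.
      mlirOperationStateAddResults(&state, 1, &resultType);
      // Consumes `state` and frees the arrays added above; touching `state`
      // again after this call would be a use-after-free.
      return mlirOperationCreate(&state);
    }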
Commit d400606f8cb2474a436df42d7d6c897ba6c9c4ee by ikudrin
[DebugInfo] Fix initialization of DwarfCompileUnit::LabelBegin.

This also fixes the condition in the assertion in
DwarfCompileUnit::getLabelBegin() because it checked something unrelated
to the returned value.

Differential Revision: https://reviews.llvm.org/D85437
The file was modified llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
Commit 8119d6c14695b436adb66f0d891863eeea9e62ad by Raphael Isemann
[lldb][NFC] Remove dead code in BreakpointResolverAddress
The file was modified lldb/source/Breakpoint/BreakpointResolverAddress.cpp
Commit c0c3b9a25feec84e739cc3a2b30e1ac336648799 by llvm-dev
[ScalarizeMaskedMemIntrin] Scalarize constant mask expandload as shuffle(build_vector,pass_through)

As noticed on D66004, scalarizing an expandload with a constant mask into a chain of irregular loads and inserts makes it tricky to optimize before lowering, resulting in difficulties merging consecutive loads, etc.

This patch instead scalarizes the expansion into a build_vector(load0, load1, undef, load2, ...) style pattern and then performs a blend shuffle with the pass-through vector. This allows us to more easily make use of all the build_vector combines, merging of consecutive loads, etc.

Differential Revision: https://reviews.llvm.org/D85416
The file was modified llvm/lib/CodeGen/ScalarizeMaskedMemIntrin.cpp
The file was modified llvm/test/CodeGen/X86/masked_expandload.ll
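A scalar C++ model of the expandload semantics may make the new pattern concrete. This is an illustration only, not LLVM API; expandLoad and the 4-lane shape are hypothetical.

    // Scalar model of llvm.masked.expandload: active lanes read consecutive
    // memory elements, inactive lanes keep the pass-through value.
    #include <array>
    #include <cstdio>

    template <unsigned N>
    std::array<float, N> expandLoad(const float *Ptr,
                                    const std::array<bool, N> &Mask,
                                    const std::array<float, N> &PassThru) {
      std::array<float, N> Result = PassThru;
      unsigned MemIdx = 0; // advances only on active lanes
      for (unsigned Lane = 0; Lane < N; ++Lane)
        if (Mask[Lane])
          Result[Lane] = Ptr[MemIdx++];
      return Result;
    }

    int main() {
      const float Mem[] = {10.f, 20.f, 30.f};
      // For the constant mask {1,1,0,1} the patch emits, conceptually,
      //   build_vector(load Mem[0], load Mem[1], undef, load Mem[2])
      // blended with the pass-through vector so lane 2 comes from it.
      auto V = expandLoad<4>(Mem, {true, true, false, true},
                             {0.f, 0.f, 99.f, 0.f});
      std::printf("%g %g %g %g\n", V[0], V[1], V[2], V[3]); // 10 20 99 30
      return 0;
    }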