Started 48 min ago
Took 35 min on green-dragon-03

Success Build rL:364295 - C:364283 - #62584 (Jun 25, 2019 4:48:47 AM)

Revisions
  • http://llvm.org/svn/llvm-project/llvm/trunk : 364295
  • http://llvm.org/svn/llvm-project/cfe/trunk : 364283
Changes
  1. [VectorLegalizer] ExpandANY_EXTEND_VECTOR_INREG/ExpandZERO_EXTEND_VECTOR_INREG - widen source vector

    The *_EXTEND_VECTOR_INREG opcodes were relaxed back around rL346784 to support source vector widths that are smaller than the output - it looks like the legalizers were never updated to account for this.

    This patch inserts the smaller source vector into an undef vector of
    the same width as the result before performing the shuffle+bitcast to
    correctly handle this.

    Part of the yak shaving to solve the crashes from rL364264 and rL364272
    by rksimon
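A minimal Python sketch of the widen-then-shuffle idea in this patch (not LLVM's actual C++ legalizer; all names are invented, and 16-bit source lanes are assumed): the narrow source is first padded out to the full result width with undef lanes, so the subsequent shuffle+bitcast never indexes past the end of the source vector.

```python
def zero_extend_vector_inreg(src_lanes, num_result_lanes, ratio):
    # ratio = dst_lane_bits // src_lane_bits, e.g. 2 for i16 -> i32.
    # Step 1 (the fix): insert the possibly-narrower source into an
    # "undef" (None) vector whose total lane count already matches the
    # result width, so the shuffle below stays in bounds.
    widened = src_lanes + [None] * (num_result_lanes * ratio - len(src_lanes))
    # Step 2: shuffle - interleave each used source lane with zero lanes
    # (little-endian lane order), which models the zero-extension.
    shuffled = []
    for i in range(num_result_lanes):
        shuffled.append(widened[i])
        shuffled.extend([0] * (ratio - 1))
    # Step 3: "bitcast" - combine each group of `ratio` narrow 16-bit
    # lanes into one wide result lane.
    wide = []
    for i in range(num_result_lanes):
        group = shuffled[i * ratio : (i + 1) * ratio]
        val = 0
        for j, lane in enumerate(group):
            val |= (lane or 0) << (j * 16)  # undef lanes read as 0 here
        wide.append(val)
    return wide
```

With a v2i16 source and a v2i32 result (the previously-crashing narrow case), the low lanes come through zero-extended exactly as with a full-width source.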
  2. [ARM] Explicit lowering of half <-> double conversions.

    If an FP_EXTEND or FP_ROUND isel dag node converts directly between
    f16 and f64 when the target CPU has no instruction to do it in one
    go, it has to be done in two steps instead, going via f32.

    Previously, this was done implicitly, because all such CPUs had the
    storage-only implementation of f16 (i.e. the only thing you can do
    with one at all is to convert it to/from f32). So isel would legalize
    the f16 into an f32 as soon as it saw it, by inserting an fp16_to_fp
    node (or vice versa), and then the fp_extend would already be f32->f64
    rather than f16->f64.

    But that technique can't support a target CPU which has full f16
    support but _not_ f64, such as some variants of Arm v8.1-M. So now we
    provide custom lowering for FP_EXTEND and FP_ROUND, which checks
    support for f16 and f64 and decides on the best thing to do given the
    combination of flags it gets back.

    Reviewers: dmgreen, samparker, SjoerdMeijer

    Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

    Tags: #llvm

    Differential Revision: https://reviews.llvm.org/D60692
    by statham
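The two-step lowering described above can be sketched in Python (an illustrative model, not the ARM backend's code; `fp16_bits_to_double` is an invented name), using the standard library's half-precision `struct` format:

```python
import struct

def fp16_bits_to_double(h16: int) -> float:
    # Step 1: FP16_TO_FP - interpret the raw half bits and widen to f32.
    # This is exact: every f16 value is representable in f32.
    as_half = struct.unpack('<e', struct.pack('<H', h16))[0]
    f32 = struct.unpack('<f', struct.pack('<f', as_half))[0]
    # Step 2: FP_EXTEND f32 -> f64 (Python floats are f64), also exact,
    # giving the f16 -> f64 conversion as a composition of two
    # operations the target does support.
    return f32
```

Because both widening steps are exact, the composition gives the same value as a hypothetical single f16 -> f64 instruction would.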
  3. [ARM] Extra MVE-related testing.

    This adds some extra RUN lines to existing test files, to check that
    things that worked in previous architecture versions haven't
    accidentally stopped working in 8.1-M. We also add some new tests: a
    test of scalar floating-point instructions that could easily be
    confused with the similar-looking vector ones at assembly time; a
    test of basic load/store/move access to the FP registers (which has
    to work even in integer-only MVE); and one final check of the really
    obvious case, where turning off MVE should make sure MVE instructions
    really are rejected.

    Reviewers: dmgreen, samparker, SjoerdMeijer, t.p.northover

    Subscribers: javed.absar, kristof.beyls, llvm-commits

    Tags: #llvm

    Differential Revision: https://reviews.llvm.org/D62682
    by statham
  4. [ARM] Add remaining miscellaneous MVE instructions.

    This final batch includes the tail-predicated versions of the
    low-overhead loop instructions (LETP); the VPSEL instruction to select
    between two vector registers based on the predicate mask without
    having to open a VPT block; and VPNOT which complements the predicate
    mask in place.

    Reviewers: dmgreen, samparker, SjoerdMeijer, t.p.northover

    Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

    Tags: #llvm

    Differential Revision: https://reviews.llvm.org/D62681
    by statham
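A hedged Python model of the lane-wise semantics described above (function and parameter names are invented for illustration, not taken from the MVE ISA definition):

```python
def vpsel(mask, qn, qm):
    """VPSEL: per lane, pick qn[i] when the predicate bit is set,
    else qm[i] - no VPT block needed."""
    return [a if bit else b for bit, a, b in zip(mask, qn, qm)]

def vpnot(mask):
    """VPNOT: complement every predicate bit (in place in hardware;
    returned as a new list here)."""
    return [1 - bit for bit in mask]
```

Composing the two, `vpsel(vpnot(mask), qn, qm)` selects the opposite lanes from `vpsel(mask, qn, qm)`.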
  5. [ARM] Add MVE vector load/store instructions.

    This adds the rest of the vector memory access instructions. It
    includes contiguous loads/stores, with an ordinary addressing mode
    such as [r0,#offset] (plus writeback variants); gather loads and
    scatter stores with a scalar base address register and a vector of
    offsets from it (written [r0,q1] or similar); and gather/scatters with
    a vector of base addresses (written [q0,#offset], again with
    writeback). Additionally, some of the loads can widen each loaded
    value into a larger vector lane, and the corresponding stores narrow
    them again.

    To implement these, we also have to add the addressing modes they
    need. Also, in AsmParser, the `isMem` query function now has
    subqueries `isGPRMem` and `isMVEMem`, according to which kind of base
    register is used by a given memory access operand.

    I've also had to add an extra check in `checkTargetMatchPredicate` in
    the AsmParser, without which our last-minute check of `rGPR` register
    operands against SP and PC was failing an assertion because Tablegen
    had inserted an immediate 0 in place of one of a pair of tied register
    operands. (This matches the way the corresponding check for `MCK_rGPR`
    in `validateTargetOperandClass` is guarded.) Apparently the MVE load
    instructions were the first to have ever triggered this assertion, but
    I think only because they were the first to have a combination of the
    usual Arm pre/post writeback system and the `rGPR` class in particular.

    Reviewers: dmgreen, samparker, SjoerdMeijer, t.p.northover

    Subscribers: javed.absar, kristof.beyls, hiraditya, llvm-commits

    Tags: #llvm

    Differential Revision: https://reviews.llvm.org/D62680
    by statham
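The two gather addressing modes described above can be modelled in Python (a simplified sketch with invented names - writeback and lane widening are omitted; a dict stands in for memory):

```python
def gather_scalar_base(memory, r0, q1_offsets):
    """Gather with a scalar base and a vector of offsets, as in
    [r0, q1]: each lane loads memory[r0 + offset]."""
    return [memory[r0 + off] for off in q1_offsets]

def gather_vector_base(memory, q0_bases, imm_offset):
    """Gather with a vector of base addresses plus one immediate
    offset, as in [q0, #offset]: each lane loads its own base."""
    return [memory[base + imm_offset] for base in q0_bases]
```

Scatter stores are the mirror image, writing one lane of a data vector to each computed address instead of reading from it.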

Started by an SCM change (2 times)

This run spent:

  • 23 min waiting;
  • 35 min build duration;
  • 59 min total from scheduled to completion.
LLVM/Clang Warnings: 1 warning.
Test Result (no failures)