Started 3 mo 9 days ago
Took 11 min

Success Build clang-r362921-t57389-b57389.tar.gz (Jun 9, 2019 11:50:25 PM)

Issues

No known issues detected

Build Log

Revision: 362564
Changes
  1. [DAGCombine] Match a pattern where a wide type scalar value is stored by several narrow stores
    This opportunity was found in SPEC 2017 557.xz_r, where the pattern appears in the SHA encrypt/decrypt code; see sha-2/sha512.c:

    static void store64(u64 x, unsigned char* y)
    {
        for(int i = 0; i != 8; ++i)
            y[i] = (x >> ((7-i) * 8)) & 255;
    }

    static u64 load64(const unsigned char* y)
    {
        u64 res = 0;
        for(int i = 0; i != 8; ++i)
            res |= (u64)(y[i]) << ((7-i) * 8);
        return res;
    }
    The load64 pattern was already handled by https://reviews.llvm.org/D26149;
    this patch implements the corresponding store pattern.

    Match a pattern where a wide scalar value is stored by several narrow
    stores. Fold it into a single store, or a BSWAP and a store, if the target
    supports it.

    Assuming little endian target:
    i8 *p = ...
    i32 val = ...
    p[0] = (val >> 0) & 0xFF;
    p[1] = (val >> 8) & 0xFF;
    p[2] = (val >> 16) & 0xFF;
    p[3] = (val >> 24) & 0xFF;

    =>
    *((i32 *)p) = val;

    i8 *p = ...
    i32 val = ...
    p[0] = (val >> 24) & 0xFF;
    p[1] = (val >> 16) & 0xFF;
    p[2] = (val >> 8) & 0xFF;
    p[3] = (val >> 0) & 0xFF;

    =>
    *((i32 *)p) = BSWAP(val);

    Differential Revision: https://reviews.llvm.org/D62897 (detail)
    by qshanz
  2. [X86] When promoting i16 compare with immediate to i32, try to use sign_extend for eq/ne if the input is truncated from a type with enough sign bits.

    Summary:
    Our default behavior is to use sign_extend for signed comparisons and zero_extend for everything else. But for equality we have the freedom to use either extension. If we can prove the input has been truncated from something with enough sign bits, we can use sign_extend instead and let DAG combine optimize it out. A similar rule is used by type legalization in LegalizeIntegerTypes.

    This gets rid of the movzx in PR42189. The immediate will still take 4 bytes instead of the 2 bytes plus 0x66 prefix that a cmp di, 32767 would get, but it avoids a length-changing prefix.

    Reviewers: RKSimon, spatel, xbolva00

    Reviewed By: xbolva00

    Subscribers: hiraditya, llvm-commits

    Tags: #llvm

    Differential Revision: https://reviews.llvm.org/D63032 (detail)
    by ctopper
  3. [X86] Disable f32->f64 extload when sse2 is enabled

    Summary:
    We can only use the memory form of cvtss2sd under optsize due to a partial register update. So previously, when optimizing for speed, we were emitting 2 instructions for extload. Also, due to a late optimization in PreprocessISelDAG, we had to handle (fpextend (loadf32)) under optsize.

    This patch forces extload to expand so that it will always be in the (fpextend (loadf32)) form during isel. And when optimizing for speed we can just let each of those pieces select an instruction independently.

    Reviewers: spatel, RKSimon

    Reviewed By: RKSimon

    Subscribers: hiraditya, llvm-commits

    Tags: #llvm

    Differential Revision: https://reviews.llvm.org/D62710 (detail)
    by ctopper

Started by upstream project relay-test-suite-verify-machineinstrs build number 5422
originally caused by:

This run spent:

  • 10 sec waiting;
  • 11 min build duration;
  • 11 min total from scheduled to completion.