Not to my knowledge. The Itanium CPU architecture was a RISC design with a modified VLIW parallel instruction execution strategy (Explicitly Parallel Instruction Computing, or EPIC, which grouped instructions into explicitly delineated bundles and supported speculative execution). This is nothing like current x86 superscalar instruction execution, where the hardware discovers the parallelism at runtime.
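To make the "instruction bundles" point concrete: IA-64 fetched instructions in 128-bit bundles, each holding three 41-bit instruction slots plus a 5-bit template field that tells the hardware how the slots may execute in parallel. Here is a minimal sketch of that bit layout (the field widths are real; the pack/unpack helpers are just illustrative, not an actual encoder):

```python
# IA-64 bundle layout: 5-bit template in the low bits,
# followed by three 41-bit instruction slots (128 bits total).
TEMPLATE_BITS = 5
SLOT_BITS = 41

def pack_bundle(template, slots):
    """Pack a 5-bit template and three 41-bit slots into a 128-bit int."""
    assert 0 <= template < (1 << TEMPLATE_BITS)
    assert len(slots) == 3 and all(0 <= s < (1 << SLOT_BITS) for s in slots)
    bundle = template
    for i, s in enumerate(slots):
        bundle |= s << (TEMPLATE_BITS + i * SLOT_BITS)
    return bundle

def unpack_bundle(bundle):
    """Recover the template and the three instruction slots."""
    template = bundle & ((1 << TEMPLATE_BITS) - 1)
    slots = [
        (bundle >> (TEMPLATE_BITS + i * SLOT_BITS)) & ((1 << SLOT_BITS) - 1)
        for i in range(3)
    ]
    return template, slots
```

The key contrast with x86: the compiler chose the template, so independence between the three slots was stated up front rather than rediscovered by out-of-order hardware on every execution.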
I suppose one could say the Itanium and Xeon cache-coherency schemes have some similarity, since both use snoopy caches with snoop filters, but AMD appears to follow a similar strategy (though the terminology differs). I'm not convinced anything of unique value was learned from the Itanium effort. Itanium did use switched point-to-point links for multi-CPU NUMA coherency, rather than the shared memory bus typical of Intel parts of the day (the FSB, or Front-Side Bus), but the modern UPI links in Xeons don't seem to be derived from the Scalability Port design used by Itanium.
My impression is that the only thing Intel really learned from Itanium was to stay far away from VLIW for general-purpose computing and stick with superscalar instruction parallelism. VLIW can reduce the hardware complexity and design cost of specialized processors, as in Intel's Gaudi AI accelerators or the Kalray networking and security processors, which I'm guessing run software hand-coded at the assembler level to make best use of VLIW. But for general-purpose processing, VLIW compilers become an overwhelming software-complexity problem.