The converse is obviously not true. The asymmetry between the super-multiplicativity and sub-multiplicativity arises because the dual norm is always defined as a supremum and never as an infimum.
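For reference (the question isn't quoted here, so this is how I am reading it): the norm on $A\otimes B$ is super-multiplicative if
$$\|a\otimes b\|\;\ge\;\|a\|_A\,\|b\|_B$$
for all $a\in A$, $b\in B$, while the dual norm, defined by the supremum
$$\|f\|^*\;=\;\sup_{\|c\|\le 1}|f(c)|,$$
is sub-multiplicative if $\|f\otimes g\|^*\le\|f\|^*\,\|g\|^*$ for all $f\in A^*$, $g\in B^*$.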
To see a counterexample, choose a direction in $A\otimes B$, for example the ray of vectors proportional to some product vector $a\otimes b$, and in a very small conical vicinity of this direction, define the norm on the tensor product space as
$$\|v\|=1000\,\|a\|\cdot\|b\|.$$
Super-multiplicativity will still obviously hold, because we have only increased the norm somewhere on the tensor product space while keeping it unchanged everywhere else.
However, the dual norm skyrockets under this tiny change, because it is a supremum over all $c$ with $\|c\|\le 1$, which includes $c\approx a\otimes b$ where the norm was amplified. Correspondingly, the dual norm of certain dual vectors has been increased to essentially $1{,}000$ times its previous value, and it is no longer sub-multiplicative.
Warning: the argument above is wrong. I misinterpreted $|b^\dagger a|$ as something that depends on the original norm, but it doesn't. The reverse implication is likely to be right, at least for some "convex" norms for which passing between the norm and the dual norm is fully reversible. Please post more complete answers if you can construct them.
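In fact, here is a quick way to see that the first argument had it backwards (assuming the deformation only ever increases the norm): if $\|v\|_{\rm new}\ge\|v\|_{\rm old}$ for all $v$, the new unit ball is contained in the old one, so for every dual vector $f$,
$$\|f\|^*_{\rm new}=\sup_{\|c\|_{\rm new}\le 1}|f(c)|\;\le\;\sup_{\|c\|_{\rm old}\le 1}|f(c)|=\|f\|^*_{\rm old}.$$
Amplifying the norm somewhere can only shrink the dual norm, never blow it up.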
OK, I think that the basic argument may still be easily fixed. Take a natural norm and redefine
$$\|v\|=0.001\,\|a\|\cdot\|b\|$$
just for some $v$ of the form $C\cdot M(a\otimes b)$, where $a,b$ are generic vectors, $M$ is a transformation close to the identity that cannot be factorized into a tensor product of transformations on the two spaces, and $C\in\mathbb{R}$. This reduction of the norm doesn't spoil super-multiplicativity, because that condition only constrains tensor product vectors and $v$ is not one. However, on the dual space, the dual norm of a product dual vector $(a\otimes b)^D$ will fail to be sub-multiplicative, because the supremum defining it is affected even by "nearby" vectors of the original space, and we have allowed some very long vectors (as measured by the original norm) into the unit ball over which the supremum is taken.
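To make the violation quantitative, here is a sketch, assuming the deformed norm is extended homogeneously, $\|C\,M(a\otimes b)\|=0.001\,|C|\,\|a\|\cdot\|b\|$, and that $f\in A^*$, $g\in B^*$ are norming functionals for $a$, $b$ (so that $f(a)=\|f\|^*\,\|a\|$ and $g(b)=\|g\|^*\,\|b\|$). The vector
$$w=\frac{M(a\otimes b)}{0.001\,\|a\|\cdot\|b\|}$$
has unit deformed norm, and since $M$ is close to the identity,
$$(f\otimes g)(w)\approx\frac{f(a)\,g(b)}{0.001\,\|a\|\cdot\|b\|}=1000\,\|f\|^*\,\|g\|^*,$$
so $\|f\otimes g\|^*\gtrsim 1000\,\|f\|^*\,\|g\|^*$, even though the norms on $A$ and $B$, and therefore $\|f\|^*$ and $\|g\|^*$, were left untouched.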
So this won't hold for sufficiently unusual norms. Some kind of convexity guaranteeing that the dualization procedure squares to the identity (i.e., that the double dual norm coincides with the original norm) could be enough to make your reverse statement valid.
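Indeed, at least in finite dimensions (which I believe is the relevant setting), dualizing twice returns the norm whose unit ball is the closed convex hull of the original one:
$$\|v\|^{**}\le\|v\|,$$
with equality for all $v$ exactly when the unit ball of $\|\cdot\|$ is closed and convex. The deformations above destroy the convexity of the unit ball, which is why they escape the reverse implication.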