The reason is that the continuum involves a notion of limit where you get an infinity of points in every little region. This is different from the potential infinity of, say, sentences, where sentences can get longer and longer, so that there are infinitely many of them, but the infinity is only approached through ever-longer sentences, further and further from the realizable ones. With the continuum, in every little box you have an infinite number of distinct points, each as accessible as any other (at least on a naive view of the continuum).
When you are doing mathematics, you are describing a continuous object with a string of symbols. This means that the only way to define quantities on a continuum is to define them on some approximate notion of a continuum, and then take the limit in which the approximation becomes dense. When you first defined real numbers, in grade school, you defined numbers with a finite number of decimal places, or perhaps rational numbers, and then abstracted the notion of real numbers as infinite sequences of decimals, or as limiting values of Cauchy sequences of rationals. In either case, you have a discrete structure with only a potential infinity, and the real numbers emerge when you take the continuum limit, either by allowing the decimal expansions to grow arbitrarily long, or by allowing the denominators of the rationals to grow arbitrarily large.
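As a concrete illustration, here is a minimal sketch in Python (Newton's iteration for $\sqrt{2}$ is an arbitrary choice of example): a Cauchy sequence of rationals whose denominators grow arbitrarily large while the values converge to an irrational number.

```python
from fractions import Fraction

# A Cauchy sequence of rationals converging to sqrt(2), computed in
# exact rational arithmetic via Newton's iteration. The "real number"
# sqrt(2) is defined as the limit of this discrete sequence.
x = Fraction(1)
for _ in range(5):
    x = (x + 2 / x) / 2          # Newton step: stays rational
    print(x, "=", float(x))      # denominators grow, values converge
```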
The reason we don't sweat this is that we are intuitively familiar with geometry, so we have an immediate understanding that this process makes sense. But it is far from obvious that the real numbers make sense. In quantum field theory, one has to deal squarely with the fact that the real numbers are actually a sophisticated idea, since you are defining quantum fluctuations in fields, which have separate degrees of freedom at every one of a continuum of points. There is an effective cutoff at any finite energy, since exciting the short-distance modes requires high energy, so physically, the behavior of fields at low energies should not depend on the high-energy field modes. But if this is true, then you should be able to define the field theory as a continuum theory, as a limit in which continuous space emerges from a discrete structure.
When formulating quantum field theory, you start with a regularization because that's the way every continuum theory is defined: as a limit. This is no different in principle from defining a differential equation. If somebody asked you "what does $\dot{x} = \alpha \sqrt{x}$ mean?", you would have to say that the next value of $x$, minus the current value of $x$, is the time increment times $\alpha\sqrt{x}$, in the limit that the step-size is small. We write it as a differential equation without a step-size because the limit makes sense. The definition of the derivative as a quotient is designed to ensure that this is so: you divide by the step-size to the first power, because this is how the increment of a differentiable function scales with step-size.
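Here is a minimal sketch of this definition in Python, assuming $\alpha = 1$ and $x(0) = 1$ for concreteness. Halving the step changes the answer less and less, which is the operational meaning of "the limit makes sense":

```python
import math

# dx/dt = alpha*sqrt(x) *means*: x(t+dt) = x(t) + dt*alpha*sqrt(x(t)),
# in the limit dt -> 0. Forward Euler is that defining prescription.
def euler(alpha, x0, t_end, dt):
    x = x0
    for _ in range(round(t_end / dt)):
        x += dt * alpha * math.sqrt(x)   # the defining increment
    return x

for dt in (0.1, 0.01, 0.001):
    print(dt, euler(1.0, 1.0, 1.0, dt))
# exact solution x(t) = (sqrt(x0) + alpha*t/2)**2 gives 2.25 at t = 1
```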
If you are doing stochastic calculus, random walks in continuous time, the increment of distance you move scales as the square root of the step-size. This means that the ordinary notion of derivative diverges when you take the small step-size limit. Stochastic calculus therefore defines the derivative of a random walk only as a distribution, so that only its integral makes sense, not its value at any one point, and you get some funny commutation relations, like
$$ x(t+\epsilon)\dot{x}(t) - x(t)\dot{x}(t) = 1$$
where the equality is understood as a distribution identity: it says that the integral of the randomly fluctuating quantity on the left over any interval is the same as the integral of 1 over the same interval, with fluctuations around this value that vanish over any finite interval in the limit of small steps. This is the stochastic version of the Heisenberg commutation relation in quantum mechanics.
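You can check both statements numerically. The sketch below (illustrative parameters, a standard Gaussian random walk with $\langle dx^2 \rangle = \epsilon$) shows that the naive derivative blows up as the step shrinks, while the time average of $(x(t+\epsilon)-x(t))\dot{x}(t) = dx^2/\epsilon$ converges to 1:

```python
import numpy as np

rng = np.random.default_rng(0)
for eps in (1e-2, 1e-3, 1e-4):
    n = int(1.0 / eps)                  # steps covering the interval [0, 1]
    dx = np.sqrt(eps) * rng.standard_normal(n)   # <dx^2> = eps
    print(eps,
          np.mean(np.abs(dx)) / eps,    # naive |xdot| ~ 1/sqrt(eps): diverges
          np.mean(dx * dx / eps))       # -> 1: the distributional identity
```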
For quantum fields, you analogously have to make an approximation to the continuum, say a lattice with step-size $\epsilon$. If the results make sense as a field theory, i.e. if they have a continuum limit, they don't depend on the step-size (the inverse cutoff) you choose, as long as it is small. The only difference is that the scaling laws for the parameters are different from those of ordinary calculus (or of stochastic calculus).
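For example, the lattice kinetic term replaces $k^2$ by $2(1-\cos k\epsilon)/\epsilon^2$. A minimal sketch (the momentum value is an arbitrary choice) shows that at fixed physical momentum the lattice expression converges to the continuum one as $\epsilon \to 0$:

```python
import numpy as np

k = 1.0   # a fixed physical momentum, arbitrary choice
for eps in (0.5, 0.1, 0.01):
    lattice = 2.0 * (1.0 - np.cos(k * eps)) / eps**2   # lattice kinetic term
    print(eps, lattice, "vs continuum", k**2)
# the lattice value approaches k^2 as eps -> 0, with O(eps^2) corrections
```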
To see explicitly how a lattice limit produces a continuous field theory, consider the Ising model. If you make the lattice small while simultaneously bringing the temperature closer to the critical temperature, so that the correlation length stays fixed as the lattice shrinks, you end up with long-ranged fluctuations in the spin described by a continuous field. The field is the number of up spins minus the number of down spins in a ball containing many lattice sites, where the ball size shrinks relative to the correlation length but grows relative to the lattice spacing. You rescale the field by the lattice spacing to a certain power and the ball size to a certain power (choosing the powers to get a finite limit as $\epsilon$ becomes small, independent of the ball size), and then you have defined a field theory. In this case, it is a scalar field theory with quartic self-interactions, and in two or three dimensions it converges to a sensible unique limit which depends only on the correlation length (the coupling flows to a fixed point in the long-distance theory). In five dimensions and above, the theory converges to a free field theory. In four dimensions, it also converges to a free field theory, but very, very slowly: the coupling only goes to zero as the inverse logarithm of the lattice spacing, so if you see a quartically self-interacting scalar field in nature with a nonzero coupling, you can conclude that the cutoff cannot be arbitrarily high: it must lie below the scale at which the logarithmic shrinking would make the coupling smaller than what you observe.
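A minimal sketch of the block-spin construction, not the full renormalization analysis: a Metropolis simulation of the 2D Ising model near its critical temperature, with the spins summed over blocks and rescaled using the known 2D spin-field dimension of 1/8. The lattice size, sweep count, and block size are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
L, b = 32, 8                 # lattice side, block side (L divisible by b)
beta = 1.0 / 2.269           # inverse temperature near T_c = 2/ln(1+sqrt(2))
s = rng.choice([-1, 1], size=(L, L))

def sweep():
    for _ in range(L * L):   # one Metropolis sweep
        i, j = rng.integers(0, L, size=2)
        nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2.0 * s[i, j] * nn             # energy cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

for _ in range(100):         # short equilibration, for illustration only
    sweep()

# Sum spins in b x b blocks (up minus down per block), then rescale.
# The exponent uses the known 2D Ising spin-field dimension (1/8), so
# the block field has order-one fluctuations, independent of block size.
blocks = s.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
phi = blocks / b ** (2 - 1.0 / 8.0)
print(phi)
```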
I gave a qualitative description because you asked for one, but the full rigorous description of the limiting process is not yet worked out, although it is understood heuristically for most cases of interest. This is an important open problem in mathematical physics.