It can, in principle, and this is exactly where the kernel trick is useful.
The standard SVM formulation gives only linear classifiers. But if you project your data into a feature space (a higher-dimensional, possibly infinite-dimensional space), a linear separator in that space can correspond to a circle in your original space. Since you cannot do explicit computations in an infinite-dimensional space, the kernel trick lets you get away without doing them at all: a kernel function gives you the inner product of two (possibly infinite-dimensional) feature vectors directly. So any classifier that requires only inner product values, and never the explicit vectors, can exploit the kernel trick, e.g., SVMs, kernelized logistic regression, etc.
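To make the "inner product without explicit vectors" point concrete, here is a minimal sketch using a degree-2 polynomial kernel, where the feature map is small enough to write out by hand (the helper names `phi` and `poly_kernel` are just illustrative):

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map for 2-D input:
    # phi(v) = (v1^2, v2^2, sqrt(2)*v1*v2)
    return np.array([v[0]**2, v[1]**2, np.sqrt(2) * v[0] * v[1]])

def poly_kernel(x, y):
    # Homogeneous polynomial kernel of degree 2: K(x, y) = (x . y)^2
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# The kernel value equals the inner product of the explicit feature
# vectors, so we never actually need to compute phi.
print(poly_kernel(x, y))                 # → 121.0
print(np.dot(phi(x), phi(y)))            # → 121.0 (same value)
```

With the RBF kernel the feature space is infinite-dimensional, so the explicit map cannot be written out at all, yet the kernel value is still a single cheap expression.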
That being said, choosing the appropriate kernel function is not always straightforward for your data.
The Radial Basis Function (RBF) kernel K(x, y) = exp(-gamma * ||x - y||^2) is a very general kernel function that can get you this.
Finally, whether you actually get such a classifier depends on your data, and as mentioned above, setting the parameters (like gamma) is not very straightforward. Typically you have to try various values (e.g., via cross-validation) and see what works for your data.
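As a sketch of what this looks like in practice, the snippet below fits an RBF-kernel SVM (via scikit-learn, assumed available) to synthetic data where one class forms a circle inside the other; the gamma value here is just an arbitrary starting point, not a recommendation:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Synthetic data: an inner circle of one class surrounded by an
# outer ring of the other -- not linearly separable in 2-D.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# RBF-kernel SVM; gamma is the kernel width parameter discussed above.
clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
print(clf.score(X, y))
```

A linear SVM would do no better than chance on this data, while the RBF kernel separates the two rings almost perfectly.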
Train your SVM (or any kernel method) and see what classifier it gives: run the classifier on data points near the boundary you want to detect to see where the learned boundary actually lies. This is only a way to see what the classification function looks like.
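One simple way to probe the learned boundary, sketched below under the same synthetic-circles setup as above (again using scikit-learn; the ray-probing idea is just one illustrative approach): walk outward from the center and watch where the predicted label flips.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

# Probe points along a ray from the origin; the radius at which the
# predicted label flips is (approximately) where the boundary lies.
radii = np.linspace(0.0, 1.5, 31)
probes = np.column_stack([radii, np.zeros_like(radii)])
labels = clf.predict(probes)
flip = radii[np.argmax(labels != labels[0])]
print(f"label flips near radius {flip:.2f}")
```

Since the inner circle has radius about 0.3 and the outer ring radius about 1.0 in this dataset, the flip should land somewhere between the two.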