In this talk, we consider the problem of sequentially optimizing an unknown and potentially non-convex function over a continuous, bounded set. Such problems are of particular interest when evaluating the function requires numerical simulations with significant computational cost, or when the objective function does not satisfy the standard properties exploited in optimization, such as linearity or convexity. In the first part, we consider the design of sequential strategies that lead to efficient optimization of an unknown function under the sole assumption that it has a finite Lipschitz constant. We introduce and analyze a first algorithm that assumes the Lipschitz constant is known, and we prove its consistency and minimax rates. An adaptive version of the algorithm is also introduced for the more realistic setting where the Lipschitz constant is unknown and has to be estimated.
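As a minimal sketch of the known-constant strategy described above (function and parameter names are hypothetical, not taken from the talk): a random candidate is evaluated only when its Lipschitz upper bound, computed from the points already evaluated, still allows it to improve on the best value seen.

```python
import numpy as np

def lipschitz_maximize(f, bounds, k, n_eval=100, max_trials=100_000, seed=0):
    """Sketch: sequential maximization assuming a known Lipschitz constant k.

    A uniformly drawn candidate x is evaluated only if its upper bound
        min_i  f(x_i) + k * ||x - x_i||
    exceeds the best observed value, i.e. only if x could still be a
    maximizer of f under the Lipschitz assumption.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x0 = rng.uniform(lo, hi)            # initial evaluation
    X, y = [x0], [f(x0)]
    trials = 0
    while len(X) < n_eval and trials < max_trials:
        trials += 1
        x = rng.uniform(lo, hi)
        upper = min(yi + k * np.linalg.norm(x - xi) for xi, yi in zip(X, y))
        if upper >= max(y):             # candidate is still potentially optimal
            X.append(x)
            y.append(f(x))
    best = int(np.argmax(y))
    return X[best], y[best]
```

The acceptance test discards candidates that the Lipschitz bound already rules out, so the evaluation budget concentrates on the region where the maximum can still lie.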

In the second part, we explore concepts from ranking theory based on the overlaying of level sets in order to develop optimization methods that do not rely on the smoothness of the function. We observe that optimizing the function essentially amounts to learning the bipartite ranking rule it induces. Based on this idea, we cast global optimization as a bipartite ranking problem, which makes it possible to address functions with weak regularity properties. Novel meta-algorithms for global optimization, built on top of any choice of bipartite ranking method, are introduced, and theoretical guarantees are provided in terms of statistical consistency and finite-time convergence toward the optimum. Finally, the algorithms developed in the thesis are compared to existing state-of-the-art methods on typical benchmark problems for global optimization.
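The meta-algorithm idea above can be sketched as follows; this is an illustrative assumption, not the thesis's actual procedure, and the nearest-neighbour scorer merely stands in for an arbitrary bipartite ranking method that would be plugged in.

```python
import numpy as np

def rank_optimize(f, bounds, n_eval=60, n_cand=200, seed=0):
    """Sketch of a ranking-based meta-algorithm (hypothetical names).

    At each step the evaluated points are split at the median of f into
    'high' and 'low' labels (a level-set labelling), a bipartite ranking
    rule is fit on these labels, and the next evaluation is the random
    candidate the rule ranks most likely to be 'high'.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(5, len(lo)))      # small initial design
    y = np.array([f(x) for x in X])
    while len(X) < n_eval:
        labels = y >= np.median(y)                  # level-set labels
        cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
        if labels.all():                            # degenerate split (ties)
            x_next = cand[0]
        else:
            # Stand-in ranker: score candidates by distance to the nearest
            # 'low' point minus distance to the nearest 'high' point.
            d_hi = np.min(np.linalg.norm(cand[:, None] - X[labels], axis=2), axis=1)
            d_lo = np.min(np.linalg.norm(cand[:, None] - X[~labels], axis=2), axis=1)
            x_next = cand[np.argmax(d_lo - d_hi)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    best = int(np.argmax(y))
    return X[best], y[best]
```

Only the ranking rule touches the function values through their ordering, which is why such a scheme does not require smoothness: any monotone transformation of f induces the same ranking and hence the same sampling behaviour.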