Talk:Hyperparameter optimization


Add Nelder-Mead?


A standard gradient-free method for iterative improvement is the Nelder-Mead method. It works in a continuous multidimensional space. It should perhaps be mentioned here, along with the general area of black-box (= zeroth-order = gradient-free) optimization. Eclecticos (talk) 16:46, 23 September 2023 (UTC)
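
For illustration only, here is a minimal sketch (not from any cited source) of the idea: scipy's Nelder-Mead implementation minimizing a cross-validated loss over two continuous hyperparameters. The choice of model (an SVM), data set, starting point, and log-scale parameterization are assumptions made just for this example.

# Sketch: Nelder-Mead as a gradient-free hyperparameter optimizer.
# Assumes scipy and scikit-learn; model and data are arbitrary examples.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def validation_loss(params):
    # Search in log10 space so the simplex moves on a sensible scale.
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
    return -score  # minimize negative cross-validated accuracy

result = minimize(validation_loss, x0=np.array([0.0, -3.0]), method="Nelder-Mead")
print("best log10(C), log10(gamma):", result.x, "accuracy:", -result.fun)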

About DeepSwarm


At least for deep neural networks, I realize that this article is now partially conceptually obsolete, in that some modern tools can optimize both the architecture and the hyperparameters simultaneously, with the caveat that this combined optimization does not apply to transfer learning. That said, to the extent that we are maintaining separate hyperparameter optimization and neural architecture search articles, the preferred location for DeepSwarm would definitely be the latter. I will try to add some prominent software, including DeepSwarm, to that article. Meanwhile, I need an academic reference for DeepSwarm, preferably one listed in its readme. --Acyclic (talk) 23:09, 8 May 2019 (UTC)

Link to Random search

The section about random search says: "Main article: Random search". Is this link actually correct? The linked article talks about Rastrigin, as if that were the established meaning of the term "random search". (Maybe it is; I don't know.) But the statement on the current page is that "[random search] replaces the exhaustive enumeration of all combinations by selecting them randomly", which seems to contradict the algorithm in the linked article. Which one is it? --Matumio (talk) 20:45, 3 September 2019 (UTC)

After reading some of the references, I think the link is just plain wrong, so I removed it. They call it "randomized search" in sklearn. Maybe that would be the better term and would avoid the above confusion? --Matumio (talk) 21:10, 3 September 2019 (UTC)
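
For reference, a minimal sketch of the sklearn interface mentioned above (RandomizedSearchCV), which samples a fixed number of hyperparameter settings from given distributions rather than enumerating a full grid. The estimator, data set, and distributions below are assumptions chosen only to make the example self-contained.

# Sketch: sklearn's "randomized search" over hyperparameter distributions.
# Assumes scipy and scikit-learn; the SVM and digits data are arbitrary examples.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Distributions to sample from, instead of an exhaustive grid of values.
param_distributions = {
    "C": loguniform(1e-2, 1e3),
    "gamma": loguniform(1e-4, 1e-1),
}

search = RandomizedSearchCV(
    SVC(), param_distributions, n_iter=20, cv=3, random_state=0
)
search.fit(X, y)
print(search.best_params_, search.best_score_)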