Julian Togelius
@togelius

AI and games researcher. Associate professor at NYU; Editor-in-Chief of @IEEETxnOnGames; director of @NYUGameLab; co-founder of http://modl.ai .

New York City
julian.togelius.com
Joined January 2009

Tweets

    Julian Togelius‏ @togelius 29 Jun 2018

    Deep reinforcement learning overfits. Often, a trained network can only play the particular level(s) you trained it on! In our new paper, we show how to train more general networks with procedural level generation, generating progressively harder levels. https://arxiv.org/abs/1806.10729 pic.twitter.com/JDNYPZuAHV

    8:57 AM - 29 Jun 2018
    9 replies 203 retweets 664 likes
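
The loop described in the tweet above — keep generating levels procedurally and only raise the difficulty once the agent reliably solves the current tier — can be summarized in a few lines. This is a hedged sketch, not the paper's actual GVGAI/RL code; make_level, train_on, and evaluate_on are hypothetical hooks.

```python
# Sketch of progressive procedural level generation for RL training.
# All hooks (make_level, train_on, evaluate_on, agent) are hypothetical
# placeholders, not the GVGAI learning-track API used in the paper.

def train_with_progressive_pcg(agent, make_level, train_on, evaluate_on,
                               max_difficulty=10, levels_per_tier=100,
                               win_rate_to_advance=0.9):
    """Train on generated levels, increasing difficulty as the agent improves."""
    difficulty = 0
    while difficulty <= max_difficulty:
        # Fresh batch of generated levels at the current difficulty,
        # so the agent rarely sees the exact same level twice.
        levels = [make_level(difficulty) for _ in range(levels_per_tier)]
        train_on(agent, levels)                 # e.g. a round of policy-gradient updates
        win_rate = evaluate_on(agent, levels)   # fraction of levels the agent solves
        if win_rate >= win_rate_to_advance:
            difficulty += 1                     # curriculum step: harder levels
    return agent
```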
      1. New conversation
      2. Julian Togelius‏ @togelius 29 Jun 2018

        The paper, written by @nojustesen @ruben_torrado @FilipoGiovanni @Amidos2006, me and @risi1979, builds on the General Video Game AI framework, which includes more than a hundred different games and lets you easily modify games and levels (and generate new ones).

        1 reply 2 retweets 14 likes
        Show this thread
      3. Julian Togelius‏ @togelius 29 Jun 2018

        We also build on our own and others' research on procedural content generation for games, a research field studying algorithms that can create new game content such as levels. Useful not only for game development but also for AI testing. More on PCG: http://pcgbook.com/

        1 reply 1 retweet 7 likes
        Show this thread
      4. Julian Togelius‏ @togelius 29 Jun 2018

        The level generators we use in our paper allow for generating levels for three different games, with different difficulty levels. So we start training agents on very simple levels, and as soon as they learn to play these levels well we increase the difficulty level. pic.twitter.com/y6J7VEIDid

        1 reply 4 retweets 13 likes
        Show this thread
      5. Julian Togelius‏ @togelius 29 Jun 2018

        By training this way, we not only find agents that generalize better to unseen levels, but we can also learn to play hard levels which we could not learn to play if we started from scratch. pic.twitter.com/ZwQmsUcLsg

        1 reply 2 retweets 10 likes
        Show this thread
      6. Julian Togelius‏ @togelius 29 Jun 2018

        We are taking the old idea of increasing the difficulty as the agent improves, which has variously been called incremental evolution, staged learning and curriculum learning, and combining it with procedural content generation.

        1 reply 2 retweets 12 likes
        Show this thread
      7. Julian Togelius‏ @togelius 29 Jun 2018

        Our results point to the need for variable environments for reinforcement learning. Using procedural content generation when learning to play games seems to be more or less necessary to achieve policies that are not brittle and specialized.

        1 reply 2 retweets 6 likes
        Show this thread
      8. Julian Togelius‏ @togelius 29 Jun 2018

        When training on a single game with a fixed, small set of levels, you are setting yourself up for overfitting. If your performance evaluation is based on the same set of levels, you are testing on the training set, which is considered a big no-no in machine learning (but not RL?).

        1 reply 2 retweets 2 likes
        Show this thread
      9. Julian Togelius‏ @togelius 29 Jun 2018

        In particular, this applies to the very popular practice of training agents to play Atari games in the ALE framework. Our results suggest that doing so encourages overfitting, and learning very brittle strategies.

        2 replies 1 retweet 4 likes
        Show this thread
      10. Julian Togelius‏ @togelius 29 Jun 2018

        In other words, reinforcement learning researchers - including but not limited to those working on games - should adopt procedural level generation as a standard practice. The @gvgai framework provides a perfect platform for this.

        3 replies 3 retweets 12 likes
        Show this thread
      11. Julian Togelius‏ @togelius 29 Jun 2018

        Our previous paper explaining the GVGAI learning track framework which we use for this research can be found here: https://arxiv.org/abs/1806.02448 

        1 reply 2 retweets 5 likes
        Show this thread
      12. End of conversation
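
The overfitting point in the conversation above — evaluating on the levels you trained on is testing on the training set — maps directly onto the usual held-out-split discipline from supervised learning. A minimal sketch, with evaluate_on again as a hypothetical hook rather than anything from the paper:

```python
# Sketch of the held-out evaluation argued for above: report performance on
# levels the agent never trained on, not on the training levels themselves.
import random

def split_levels(levels, held_out_fraction=0.2, seed=0):
    """Split a pool of generated levels into disjoint train and test sets."""
    rng = random.Random(seed)
    shuffled = list(levels)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1.0 - held_out_fraction))
    return shuffled[:cut], shuffled[cut:]

def generalization_report(agent, evaluate_on, train_levels, test_levels):
    """Compare performance on training levels versus held-out levels."""
    return {
        "train_levels": evaluate_on(agent, train_levels),
        "held_out_levels": evaluate_on(agent, test_levels),  # the number that actually matters
    }
```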
      1. New conversation
      2. hardmaru‏ @hardmaru 29 Jun 2018
        Replying to @togelius

        Really cool paper! To be fair, many environments created by @robo_skills for OpenAI Gym, such as bipedalwalker-hardcore and carracing are also procedurally generated, although not to the extent of your approach! I still find simple procedural generation helps prevent overfitting. pic.twitter.com/mRQjlBgMKq

        1 reply 1 retweet 13 likes
      3. Julian Togelius‏ @togelius 29 Jun 2018
        Replying to @hardmaru @robo_skills

        Thanks! I agree, even a little bit of random variation certainly helps, and thanks for pointing to that bipedal walker environment - we should cite it. I do believe though that the more thorough the PCG is, the more we challenge the generalization capacity of the agent.

        1 reply 0 retweets 3 likes
      4. Julian Togelius‏ @togelius 29 Jun 2018
        Replying to @togelius @hardmaru @robo_skills

        I've long wanted to build an environment which gradually generalizes and complexifies forever until you get to "actually general" intelligence. Let's say that @gvgai is a baby step in that direction...

        2 replies 0 retweets 6 likes
      5. Thomas Miconi‏ @ThomasMiconi 29 Jun 2018
        Replying to @togelius @hardmaru and

        Many people thinking about this.... :)

        1 reply 0 retweets 6 likes
      6. hardmaru‏ @hardmaru 29 Jun 2018
        Replying to @ThomasMiconi @togelius and

        Would be cool to try out ideas like the one in @kenneth0stanley's paper on "Minimal Criterion Coevolution: A New Approach to Open-Ended Search" using @gvgai's platform. https://twitter.com/hardmaru/status/870902132399931393

        Quoting hardmaru @hardmaru:
        Minimal Criterion Coevolution: A New Approach to Open-Ended Search. Latest work by Brant @Kenneth0Stanley @GECCO2017 http://eplex.cs.ucf.edu/papers/brant_gecco17.pdf pic.twitter.com/lOzsqf3ud2
        1 reply 3 retweets 12 likes
      7. Julian Togelius‏ @togelius 29 Jun 2018
        Replying to @hardmaru @ThomasMiconi and

        Definitely! Competitive coevolution of levels and controllers (where each part is driven by evolution, gradient descent, or possibly something else) has been on my list for a long time now. If only I had more people...

        1 reply 0 retweets 6 likes
      8. iandanforth‏ @iandanforth 29 Jun 2018
        Replying to @togelius @hardmaru and

        Open a 'Needs Help' ticket on github and tweet it?

        1 reply 0 retweets 2 likes
      9. Julian Togelius‏ @togelius 29 Jun 2018
        Replying to @iandanforth @hardmaru and

        That is an interesting approach to research collaboration. Not sure what I think of it...

        1 reply 0 retweets 2 likes
      10. 2 more replies
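
The competitive coevolution of levels and controllers mentioned in the conversation above, in the spirit of minimal-criterion coevolution, could look roughly like the following: keep only levels that are neither trivial nor unsolvable for the current agents, and keep training agents against the surviving levels. Everything here is a hypothetical sketch, not code from any of the cited papers; train, solve_rate, and mutate_level are placeholder hooks.

```python
# Hedged sketch of coevolving levels and agents, loosely in the spirit of
# minimal-criterion coevolution. All hooks are hypothetical placeholders.
import random

def coevolve(agents, levels, train, solve_rate, mutate_level,
             generations=50, min_rate=0.1, max_rate=0.9, seed=0):
    """Alternate between improving agents and evolving levels of useful difficulty."""
    rng = random.Random(seed)
    agents, levels = list(agents), list(levels)
    for _ in range(generations):
        # 1) Improve every agent against the current level population.
        agents = [train(agent, levels) for agent in agents]
        # 2) Keep levels that are neither trivial nor unsolvable for the agents
        #    (the minimal criterion), then refill the population by mutation.
        survivors = [lvl for lvl in levels
                     if min_rate <= solve_rate(agents, lvl) <= max_rate]
        parents = survivors or levels  # fall back if nothing meets the criterion
        levels = survivors + [mutate_level(rng.choice(parents))
                              for _ in range(len(levels) - len(survivors))]
    return agents, levels
```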
      1. New conversation
      2. mike cook  🌱 🤖 🌱‏ @mtrc 29 Jun 2018
        Replying to @togelius

        I love this idea a whole bunch. It’s a Rocky training montage but for AI.

        1 reply 0 retweets 5 likes
      3. Julian Togelius‏ @togelius 29 Jun 2018
        Replying to @mtrc

        This is a beautiful picture...

        1 reply 0 retweets 1 like
      4. mike cook  🌱 🤖 🌱‏ @mtrc 29 Jun 2018
        Replying to @togelius

        Just an idea for the conference talk; that’s all I’m saying.

        1 reply 0 retweets 0 likes
      5. Julian Togelius‏ @togelius 29 Jun 2018
        Replying to @mtrc

        Oh yes. Might need to edit it down so it fits in a minute while we describe the algorithm over it... https://www.youtube.com/watch?v=632hVDL_N6w

        0 replies 0 retweets 0 likes
      6. End of conversation
