Introduction

Last week, we went over higher-order functions in Kotlin. We learned that higher-order functions can accept functions as parameters and can also return functions. This week, we will take a look at lambdas. Lambdas are another kind of function, and they are very popular in the functional programming world.

Logic & Data

Computer programs are made up of two parts: logic and data. Usually, the logic lives in functions, and data is passed to those functions. The functions do things with the data and return a result. When we write a function, we typically give it a name. As we saw last week, this is a typical named function:

fun hello(name: String): String {
    return "Hello, $name"
}

Then you can call this function:

fun main() {
    println(hello("Matt"))
}

Which gives us the result:

Hello, Matt

Functions as Data

There is a concept in the functional programming world where functions are treated as data. Lambdas (functions as data) can do the same things as named functions, but a lambda's body can be passed directly into other functions. A lambda can also be assigned to a variable as though it were just a value.
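
To make this concrete, here is a minimal sketch of passing a function into another function as data. The runTwice helper is hypothetical, just for illustration, and the brace syntax used for the lambdas is explained in the next section:

fun runTwice(action: () -> String): String {
    // Call the function that was passed in, twice.
    return action() + " " + action()
}

fun main() {
    // Pass a lambda directly as the argument...
    println(runTwice({ "Hi!" }))

    // ...or assign it to a variable first and pass the variable.
    val greeting = { "Hey!" }
    println(runTwice(greeting))
}

Which gives us the result:

Hi! Hi!
Hey! Hey!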

Lambda Syntax

Lambdas are similar to named functions, but a lambda has no name, and the syntax looks a little different. Whereas a function in Kotlin would look like this:

fun hello(): String {
    return "Hello World"
}

The lambda expression would look like this:

{ "Hello World" }

Here is an example with a parameter:

fun hello(name: String): String {
    return "Hello, $name"
}

The lambda version:

{ name: String -> "Hello, $name" }

You can call the lambda immediately by passing an argument in parentheses after the closing curly brace:

{ name: String -> "Hello, $name" }("Matt")
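
As a quick sanity check, here is that immediate invocation inside a complete program:

fun main() {
    // Define the lambda and invoke it in a single expression.
    println({ name: String -> "Hello, $name" }("Matt"))
}

Which gives us the result:

Hello, Matt

Invoking a lambda in place like this is legal Kotlin, though outside of demonstrations it is rarely useful.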

It’s also possible to assign a lambda to a variable:

val hello = { name: String -> "Hello, $name" }

You can then call the variable the lambda has been assigned to, just as if it were a named function:

hello("Matt")
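
Here is the whole thing as a runnable program. Note that Kotlin infers the function type (String) -> String for the variable:

fun main() {
    // hello has the inferred type (String) -> String.
    val hello = { name: String -> "Hello, $name" }
    println(hello("Matt"))
}

Which gives us the result:

Hello, Matt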

Lambdas provide us with a convenient way to pass logic into other functions without having to define that logic in a named function. This is very useful when processing lists or arrays of data. We’ll take a look at processing lists with lambdas in the next post!
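
As a small preview, here is a sketch using the standard library's map function, which takes a lambda and applies it to each element of a list:

fun main() {
    val names = listOf("Matt", "Sam")
    // map applies the lambda to each element and returns a new list.
    val greetings = names.map { name -> "Hello, $name" }
    println(greetings)
}

Which gives us the result:

[Hello, Matt, Hello, Sam]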
