Developing real-time data pipelines with Spring and Kafka - marius_bogoevici
Talk given at the Apache Kafka NYC Meetup, October 20, 2015.
http://www.meetup.com/Apache-Kafka-NYC/events/225697500/
Kafka has emerged as a clear choice for a high-throughput, low-latency messaging system that addresses the needs of high-performance streaming applications. The Spring Framework has been, over the last decade, the de facto standard for developing enterprise Java applications, providing a simple and powerful programming model that lets developers focus on business needs while leaving the boilerplate and middleware integration to the framework itself. In fact, it has evolved into a rich and powerful ecosystem, with projects focusing on specific aspects of enterprise software development - Spring Boot, Spring Data, Spring Integration, Spring XD, and Spring Cloud Stream/Data Flow, to name just a few.
In this presentation, Marius Bogoevici from the Spring team will take the perspective of the Kafka user, and show, with live demos, how the various projects in the Spring ecosystem address their needs:
- how to build simple data integration applications using Spring Integration Kafka;
- how to build sophisticated data pipelines with Spring XD and Kafka;
- how to build cloud-native, message-driven microservices using Spring Cloud Stream and Kafka, and how to orchestrate them using Spring Cloud Data Flow (a minimal sketch of the underlying Spring-for-Apache-Kafka building blocks follows this list).
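As referenced in the last bullet, here is a minimal sketch (not taken from the talk) of the Spring for Apache Kafka building blocks that Spring Integration Kafka, Spring XD, and Spring Cloud Stream build on. The topic and group names are illustrative placeholders, and a Spring Boot application with spring-kafka on the classpath is assumed so that the KafkaTemplate and listener container are auto-configured.

// Minimal sketch: producing and consuming with Spring for Apache Kafka.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class GreetingsFlow {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public GreetingsFlow(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Publish a message to the (hypothetical) "greetings" topic.
    public void send(String payload) {
        kafkaTemplate.send("greetings", payload);
    }

    // Consume messages from the same topic; Spring Boot configures the listener container.
    @KafkaListener(topics = "greetings", groupId = "demo-group")
    public void receive(String payload) {
        System.out.println("Received: " + payload);
    }
}

The higher-level projects mentioned in the abstract layer channels, bindings, and orchestration on top of exactly this template/listener model.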
The Dream Stream Team for Pulsar and Spring - Timothy Spann (StreamNative)
For building Java applications, Spring is the universal answer, as it supplies all the connectors and integrations one could want. The same is true for Apache Pulsar, which provides connectors, integration, and flexibility for any use case. Apache Pulsar has a robust native Java library to use with Spring, as well as other protocol options.
Apache Pulsar provides a cloud-native, geo-replicated, unified messaging platform that supports many messaging paradigms. This lends itself well to upgrading existing applications, as Pulsar supports using libraries for WebSockets, MQTT, Kafka, JMS, AMQP, and RocketMQ. In this talk I will build some example applications utilizing several different protocols, covering a variety of use cases from IoT to microservices to log analytics.
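A minimal sketch of the native Java client mentioned above, assuming a broker reachable at pulsar://localhost:6650; the topic and subscription names are illustrative placeholders.

// Minimal Pulsar producer/consumer round trip using the native Java client.
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionType;

public class PulsarQuickstart {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Producer: send a single string message to a demo topic.
        Producer<String> producer = client.newProducer(Schema.STRING)
                .topic("persistent://public/default/demo-topic")
                .create();
        producer.send("hello from Spring-land");

        // Consumer: subscribe and read the message back.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("persistent://public/default/demo-topic")
                .subscriptionName("demo-subscription")
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();
        System.out.println("Got: " + consumer.receive().getValue());

        consumer.close();
        producer.close();
        client.close();
    }
}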
https://2022.springio.net/sessions/the-dream-stream-team-for-pulsar-and-spring
SPRING I/O 2022
THE CONFERENCE
Spring I/O is the leading European conference focused on the Spring Framework ecosystem.
Join us in our 9th in-person edition!
May 26/27, 2022 Barcelona, Spain
Microservices Platform with Spring Boot, Spring Cloud Config, Spring Cloud Ne... - Tin Linn Soe
This document provides an overview of microservices architecture using Spring Boot, Eureka, and Spring Cloud. It describes using Spring Boot for cloud-native development, Eureka for service registration and discovery, Spring Cloud Config for distributed configuration, Zuul proxy as an API gateway, Feign for communication between services, and Sleuth for distributed request tracing. It also demonstrates a sample application with three microservices that register with Eureka, fetch configurations from Config Server, communicate through Feign, and trace logs with Sleuth. Diagrams and code snippets are presented to illustrate the concepts and architecture.
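For illustration, a minimal sketch of the Feign piece described above; the service name and endpoint are hypothetical placeholders, and the application is assumed to register with a running Eureka server so the client can be resolved by service name.

// Minimal Spring Cloud OpenFeign client in a Spring Boot application.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

@SpringBootApplication
@EnableFeignClients
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

// Declarative HTTP client: "customer-service" is looked up through service discovery,
// so no host names are hard-coded here.
@FeignClient(name = "customer-service")
interface CustomerClient {
    @GetMapping("/customers/{id}")
    String findCustomer(@PathVariable("id") Long id);
}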
This document discusses optimizing and profiling Golang REST APIs. It explains that profiling measures program performance to aid optimization. The steps are to deploy an application, conduct profiling to identify slow code, analyze profiling data using tools like pprof, decide on solutions like using goroutines or caching, test the impact of changes, and repeat until performance goals are met before deployment. Short profiling demos are provided and load testing results show performance improvements from optimization solutions.
Presented at GSMA Mobile Connect + FIDO Alliance: The Future of Strong Authentication
By: Rolf Lindemann, Senior Director of Technology and Products, Nok Nok Labs
Detect HTTP Brute Force attack using Snort IDS/IPS on PFSense Firewall - Huda Seyam
This project presents a solution to protect web pages that collect passwords and user names against HTTP brute force. It performs brute-force password auditing against web servers that use HTTP authentication with Nmap, and detects the attack using Snort IDS/IPS on a pfSense firewall.
The document introduces the Orion Context Broker, which is a component of FIWARE that provides an API for managing context information. It describes how the Context Broker can be used to gather and share contextual data from various sources to enable smart applications. Key features of the Context Broker include allowing context producers to publish and update data, consumers to retrieve data through queries, and consumers to subscribe to receive notifications about data updates through subscriptions. Examples are provided for common operations like creating and updating entities, attributes, subscriptions, and using filters.
Technical Deep Dive: Using Apache Kafka to Optimize Real-Time Analytics in Fi... - confluent
Watch this talk here: https://www.confluent.io/online-talks/using-apache-kafka-to-optimize-real-time-analytics-financial-services-iot-applications
When it comes to the fast-paced nature of capital markets and IoT, the ability to analyze data in real time is critical to gaining an edge. It’s not just about the quantity of data you can analyze at once, it’s about the speed, scale, and quality of the data you have at your fingertips.
Modern streaming data technologies like Apache Kafka and the broader Confluent platform can help detect opportunities and threats in real time. They can improve profitability, yield, and performance. Combining Kafka with Panopticon visual analytics provides a powerful foundation for optimizing your operations.
Use cases in capital markets include transaction cost analysis (TCA), risk monitoring, surveillance of trading and trader activity, compliance, and optimizing profitability of electronic trading operations. Use cases in IoT include monitoring manufacturing processes, logistics, and connected vehicle telemetry and geospatial data.
This online talk will include in-depth practical demonstrations of how Confluent and Panopticon together support several key applications. You will learn:
- Why Apache Kafka is widely used to improve performance of complex operational systems
- How Confluent and Panopticon open new opportunities to analyze operational data in real time
- How to quickly identify and react immediately to fast-emerging trends, clusters, and anomalies
- How to scale data ingestion and data processing
- How to build new analytics dashboards in minutes
Event Driven Architecture with a RESTful Microservices Architecture (Kyle Ben... - confluent
Tinder’s Quickfire Pipeline powers all things data at Tinder. It was originally built using AWS Kinesis Firehoses and has since been extended to use both Kafka and other event buses. It is the core of Tinder’s data infrastructure. This rich data flow of both client and backend data has been extended to service a variety of needs at Tinder, including Experimentation, ML, CRM, and Observability, allowing backend developers easier access to shared client side data. We perform this using many systems, including Kafka, Spark, Flink, Kubernetes, and Prometheus. Many of Tinder’s systems were natively designed in an RPC first architecture.
Topics we'll discuss around decoupling your system at scale via event-driven architectures include:
– Powering ML, backend, observability, and analytical applications at scale, including an end-to-end walkthrough of our processes that allow non-programmers to write and deploy event-driven data flows.
– An end-to-end look at dynamic event processing that creates other stream processes, via a dynamic control-plane topology pattern and a broadcast state pattern.
– How to manage the unavailability of cached data that would normally come from repeated API calls for data that's being backfilled into Kafka, all online! (and why this is not necessarily a “good” idea)
– Integrating common OSS frameworks and libraries like Kafka Streams, Flink, Spark, and friends to encourage the best design patterns for developers coming from traditional service-oriented architectures, including pitfalls and lessons learned along the way.
– Why and how to avoid overloading microservices with excessive RPC calls from event-driven streaming systems.
– Best practices in common data flow patterns, such as shared state via RocksDB + Kafka Streams, as well as the complementary tools in the Apache ecosystem.
– The simplicity and power of streaming SQL with microservices
Directory traversal, also known as path traversal, allows attackers to access files and directories outside of the web server's designated root folder. This can lead to attacks like file inclusion, where malicious code is executed on the server, and source code disclosure, where sensitive application code is revealed. Local file inclusion allows attackers to include files from the local web server, while remote file inclusion includes files from external websites, potentially allowing remote code execution on the vulnerable server.
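A common server-side mitigation (illustrative, not from the document) is to canonicalize the requested path and reject anything that escapes the configured base directory; a minimal Java sketch, with the base directory as a placeholder:

// Reject path traversal by normalizing the resolved path and checking containment.
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeFileAccess {
    private static final Path BASE_DIR = Paths.get("/var/www/app/files").normalize();

    public static Path resolveOrReject(String userSuppliedName) {
        Path requested = BASE_DIR.resolve(userSuppliedName).normalize();
        // Input like "../../etc/passwd" normalizes to a path outside BASE_DIR and is rejected.
        if (!requested.startsWith(BASE_DIR)) {
            throw new SecurityException("Path traversal attempt: " + userSuppliedName);
        }
        return requested;
    }

    public static void main(String[] args) {
        System.out.println(resolveOrReject("report.pdf"));       // allowed
        System.out.println(resolveOrReject("../../etc/passwd")); // throws SecurityException
    }
}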
Metrics for the Win: Using Micrometer to Understand Application Behavior - VMware Tanzu
"SpringOne Platform 2019
Session Title: Metrics for the Win: Using Micrometer to Understand Application Behavior
Speaker: Erin Schnabel, Senior Technical Staff Member, IBM
Youtube: https://youtu.be/_Vg4J9cdO6s
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi - DataWorks Summit
At Comcast, our team has been architecting a customer experience platform which is able to react to near-real-time events and interactions and deliver appropriate and timely communications to customers. By combining the low latency capabilities of Apache Flink and the dataflow capabilities of Apache NiFi we are able to process events at high volume to trigger, enrich, filter, and act/communicate to enhance customer experiences. Apache Flink and Apache NiFi complement each other with their strengths in event streaming and correlation, state management, command-and-control, parallelism, development methodology, and interoperability with surrounding technologies. We will trace our journey from starting with Apache NiFi over three years ago and our more recent introduction of Apache Flink into our platform stack to handle more complex scenarios. In this presentation we will compare and contrast which business and technical use cases are best suited to which platform and explore different ways to integrate the two platforms into a single solution.
Storage Capacity Management on Multi-tenant Kafka Cluster with Nurettin Omeroglu - HostedbyConfluent
"I will be presenting how we do the smart/automated capacity management on Multi-tenant Kafka cluster in Booking.com. It was a long journey. In this end to end story, I will be presenting what the issues were at the beginning, how we came up with a plan, designed, implemented, and applied to our existing clusters smoothly, now how the clients can monitor and even get alerted before their reserved capacity has been reached. What were the challenges and our learnings? What is next?
Why? In Booking.com, the infra team manages 60 different Kafka clusters with hundreds of topics in each. There are clusters running with hundred brokers. As there are hundreds of Kafka clients from tens of different departments, it is high likely some of the clients start abusing the cluster. Especially during peak times, when the retention was set as retention.ms, or when the underlying message size changes, it is hard to predict what would be the occupied storage in total. Finding the relevant clients, deciding which data to discard, dealing with so many unknowns in a short period of time can be hassle. Also these are not fun activities but just a toil for the team.
What? To avoid such boring issues, the team has chosen the path to build a smart mechanism and have quotas in place. It helped saving time developing new features instead of chasing people to resolve collisions. You can think that as an extension to the built-in throttling producer/consumer rate limits provided by the Apache Kafka, but it is much more than that. There are several components will be explained during the presentation one of them is our control plane (custom built) which manages the communication between clients and servers and does many things automated.
Another one is the Custom Policies that we plugged in on the Kafka side to validate the configuration even tried (malicious configuration) on the server side. The talk guarantees learning and shows examples of Kafka at scale problems in Booking.com."
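As a rough illustration of such a server-side policy (a hedged sketch, not Booking.com's actual code), Kafka brokers can load a class implementing CreateTopicPolicy via the create.topic.policy.class.name broker setting; the retention limit below is an arbitrary example.

// Reject topic creation requests whose retention exceeds a configured maximum.
import java.util.Map;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

public class MaxRetentionTopicPolicy implements CreateTopicPolicy {
    // Illustrative limit: 7 days of retention.
    private static final long MAX_RETENTION_MS = 7L * 24 * 60 * 60 * 1000;

    @Override
    public void configure(Map<String, ?> configs) {
        // Policy-specific settings could be read from broker configs here.
    }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        String retention = requestMetadata.configs().get("retention.ms");
        if (retention != null && Long.parseLong(retention) > MAX_RETENTION_MS) {
            throw new PolicyViolationException(
                "retention.ms for topic " + requestMetadata.topic()
                    + " exceeds the allowed maximum of " + MAX_RETENTION_MS + " ms");
        }
    }

    @Override
    public void close() {
        // Nothing to clean up in this sketch.
    }
}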
Maximize Greenplum For Any Use Cases Decoupling Compute and Storage - Greenpl... - VMware Tanzu
Greenplum's Platform Extension Framework (PXF) allows Greenplum to access external heterogeneous data. PXF acts as a federated query engine using built-in connectors to access various data sources and formats in parallel. This provides high throughput access to external data stored in places like Amazon S3, HDFS, SQL databases, and others. PXF provides a tabular view of external data and supports reading and writing external data stores.
This document provides an overview of Dell SonicWALL's next generation firewall solutions. It summarizes the company's history and leadership position in unified threat management firewall appliances. Key capabilities of SonicWALL's next generation firewall architecture are described at a high level, including deep packet inspection, application identification and control, single sign-on, and security services like intrusion prevention and SSL decryption. Common deployment scenarios are also outlined, such as traditional NAT gateway deployments, high availability configurations, and inline or wireless access point modes.
Streaming with Spring Cloud Stream and Apache Kafka - Soby Chacko - VMware Tanzu
Spring Cloud Stream is a framework for building microservices that connect and integrate using streams of events. It supports Kafka, RabbitMQ, and other middleware. Kafka Streams is a client library for building stateful stream processing applications against Apache Kafka clusters. With Spring Cloud Stream, developers can write Kafka Streams applications using Java functions and have their code deployed and managed. This allows building stream processing logic directly against Kafka topics in a reactive, event-driven style.
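A hedged sketch of that functional style: the Kafka Streams binder binds a plain java.util.function.Function bean to input and output topics. The bean name and topics are assumptions that would be configured separately in application properties.

// A Kafka Streams processor exposed as a Java function for Spring Cloud Stream.
import java.util.function.Function;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WordCaseProcessor {

    // The binder wires this Function's input and output to Kafka topics
    // (e.g. via process-in-0 / process-out-0 bindings) and manages the topology.
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> process() {
        return input -> input.mapValues(value -> value.toUpperCase());
    }
}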
ksqlDB is a stream processing SQL engine which allows stream processing on top of Apache Kafka. ksqlDB is based on Kafka Streams and provides capabilities for consuming messages from Kafka, analysing these messages in near real time with a SQL-like language, and producing results back to a Kafka topic. Not a single line of Java code has to be written, and you can reuse your SQL know-how. This lowers the bar for starting with stream processing significantly.
ksqlDB offers powerful stream processing capabilities, such as joins, aggregations, time windows, and support for event time. In this talk I will present how ksqlDB integrates with the Kafka ecosystem and demonstrate how easy it is to implement a solution using ksqlDB for the most part. This will be done as a live demo on a fictitious IoT sample.
Transactional File System In Java - Commons Transaction - Guo Albert
The document discusses using the Apache Commons Transaction library to provide transactional file system access in Java applications. It describes a scenario where the library could be used to manage concurrent access to resources and ensure data integrity. Key features and implementation steps of the Commons Transaction library are outlined, including initializing a FileResourceManager, starting a transaction, modifying resources, and transaction management.
OAuth 2.0 allows third party applications to access resources without sharing credentials. It uses grant types like authorization code and implicit grant to obtain an access token. The access token is then used by the client to access resources from the resource server. DataPower supports OAuth 2.0 and provides customization options like additional grant types and extension points to customize the OAuth handshake process.
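For illustration, the token request of the authorization code grant can be sketched as follows; the token endpoint URL, client credentials, and redirect URI are hypothetical placeholders (the code value is borrowed from the RFC 6749 examples), and the parameters shown are the standard ones defined by the spec.

// Exchange an authorization code for an access token (Java 11+ HttpClient).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenExchange {
    public static void main(String[] args) throws Exception {
        String form = "grant_type=authorization_code"
                + "&code=SplxlOBeZQQYbYS6WxSbIA"           // code returned to the redirect URI
                + "&redirect_uri=https%3A%2F%2Fclient.example.com%2Fcb"
                + "&client_id=my-client-id"
                + "&client_secret=my-client-secret";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://auth.example.com/oauth2/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        // The response body is JSON containing access_token, token_type, and expires_in.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}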
Versioned State Stores in Kafka Streams with Victoria Xia - HostedbyConfluent
"KIP-889 is the first in a sequence of KIPs to introduce versioned key-value stores into Kafka Streams. Versioned key-value stores enhance stateful processing capabilities by allowing users to store multiple record versions per key, rather than only the single latest version per key as is the case for existing key-value stores today. Storing multiple record versions per key unlocks use cases such as true temporal stream-table joins: when an out-of-order record arrives on the stream-side, Kafka Streams can produce the correct join result by looking ""back in time"" for the table state at the timestamp of the stream-side record. Foreign-key joins will see similar benefits, and users can also support custom use cases in their applications by running interactive queries to look up older record versions from versioned state stores, or by using them in custom processors.
This talk will introduce versioned state stores starting from the basics, discuss the stream-table join use case as motivation, operational considerations for users who'd like to use them, briefly touch on implementation in doing so, and also cover the timeline for when various pieces of functionality can be expected. By May 2023, KIP-889 will be complete and follow-up KIPs will be opened/in-progress."
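The store API that KIP-889 introduces can be sketched roughly as follows; this is a hedged example, not from the talk, it assumes Kafka Streams 3.5+, and the store name and retention window are illustrative.

// Configure a persistent versioned key-value store that keeps per-key history.
import java.time.Duration;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.VersionedBytesStoreSupplier;

public class VersionedStoreExample {
    public static void main(String[] args) {
        // Keep up to 1 hour of history per key, so lookups can go "back in time"
        // to the table state at an out-of-order stream record's timestamp.
        VersionedBytesStoreSupplier supplier =
                Stores.persistentVersionedKeyValueStore("orders-versioned", Duration.ofHours(1));
        System.out.println("Configured versioned store: " + supplier.name());
        // In a real topology the supplier would be passed via Materialized.as(...)
        // when building the table that participates in the stream-table join.
    }
}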
Final Project - USMx CC605x Cloud Computing for Enterprises - Hugo Aquino
The document presents a final project analyzing the potential for a company to migrate its IT infrastructure from an on-premises data center to cloud computing on AWS. It finds that moving to AWS reserved instances could save over $3.5 million versus on-premises costs over 3 years, with a payback period of 14 months. It describes the company's need to scale efficiently and lower costs to support growth. A SWOT analysis and technical feasibility checklists are recommended before fully committing to a cloud migration.
With Apache Kafka 0.9, the community has introduced a number of features to make data streams secure. In this talk, we’ll explain the motivation for making these changes, discuss the design of Kafka security, and explain how to secure a Kafka cluster. We will cover common pitfalls in securing Kafka, and talk about ongoing security work.
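For illustration, client-side TLS settings of the kind introduced in that release look roughly like this; the broker address, file paths, and passwords are placeholders, and SASL authentication plus ACL authorization are configured separately (in 0.9, via a JAAS file and broker-side authorizer settings).

// Illustrative SSL client configuration for Kafka producers/consumers.
import java.util.Properties;

public class SecureClientConfig {
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9093"); // SSL listener port
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Only needed when brokers require client (mutual) authentication:
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}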
Restoring Restoration's Reputation in Kafka Streams with Bruno Cadonna & Luca... - HostedbyConfluent
The document discusses challenges with restoration in Kafka Streams applications and how the state updater improves restoration. It introduces the state updater, which runs restoration in parallel to processing to avoid blocking processing. This allows restoration checkpoints to be taken and avoids falling out of the consumer group if restoration is slow. Experiments show the state updater approach reduces restoration time and CPU usage compared to blocking restoration. The broader vision is for the state updater to support exactly-once semantics and multi-core scenarios.
Defending against Java Deserialization Vulnerabilities - Luca Carettoni
Java deserialization vulnerabilities have recently gained popularity due to a renewed interest from the security community. Despite being publicly discussed for several years, a significant number of Java based products are still affected. Whenever untrusted data is used within deserialization methods, an attacker can abuse this simple design anti-pattern to compromise your application. After a quick introduction of the problem, this talk will focus on discovering and defending against deserialization vulnerabilities. I will present a collection of techniques for mitigating attacks when turning off object serialization is not an option, and we will discuss practical recommendations that developers can use to help prevent these attacks.
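One commonly recommended mitigation when turning off object serialization is not an option (not necessarily the specific technique the talk presents) is a JEP 290 deserialization filter that only admits an explicit allow-list of classes; a minimal sketch, assuming Java 9+:

// Allow-list deserialization filter applied to an ObjectInputStream.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.ArrayList;

public class SafeDeserialization {
    public static void main(String[] args) throws Exception {
        // Serialize something harmless for the demo.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new ArrayList<String>());
        }

        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            // Admit only java.base classes and reject everything else, which blocks
            // typical gadget-chain classes before they are instantiated.
            in.setObjectInputFilter(
                ObjectInputFilter.Config.createFilter("java.base/*;!*"));
            Object value = in.readObject();
            System.out.println("Deserialized: " + value.getClass().getName());
        }
    }
}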
Customizing Burp Suite - Getting the Most out of Burp Extensions - August Detlefsen
The document discusses customizing Burp Suite by creating extensions using the Burp Extender API. It provides examples of building passive and active scanners, handling insertion points for active scanning, modifying requests through an HTTP listener, and debugging extensions. The goal is to customize Burp Suite functionality by adding new features through extensions.
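A minimal extension skeleton in the legacy Burp Extender (Java) API looks roughly like this; the extension name and logging behavior are illustrative, while the package, class name, and interfaces are the ones the API expects.

// Minimal Burp extension that logs the method and URL of every request passing through Burp.
package burp;

public class BurpExtender implements IBurpExtender, IHttpListener {

    private IExtensionHelpers helpers;
    private IBurpExtenderCallbacks callbacks;

    @Override
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks) {
        this.callbacks = callbacks;
        this.helpers = callbacks.getHelpers();
        callbacks.setExtensionName("Request logger");
        // Receive every request/response that passes through Burp's tools.
        callbacks.registerHttpListener(this);
    }

    @Override
    public void processHttpMessage(int toolFlag, boolean messageIsRequest, IHttpRequestResponse messageInfo) {
        if (messageIsRequest) {
            IRequestInfo info = helpers.analyzeRequest(messageInfo);
            callbacks.printOutput(info.getMethod() + " " + info.getUrl());
        }
    }
}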
This document provides an outline for a presentation on pentesting web applications with Burp Suite. It discusses using Burp Suite to scope a target, map content through spidering and directory bruteforcing, replace automated scanning with manual fuzzing using attack payload lists, and test authentication through bruteforcing logins. Specific techniques covered include using the Burp spider, intruder, and engagement tools to discover content and hidden directories, importing wordlists to bruteforce hidden paths, and configuring intruder payloads and grep rules to analyze results from fuzzing and authentication testing.
Getting the Most out of Burp Extensions. How to build a Burp extension, techniques for passive and active scanners, defining insertion points, modifying requests, and building GUI tools. This talk presents code libraries to make it easy for testers to rapidly customize Burp Suite.
This document provides an agenda for a presentation on web application pentesting and using Burp Suite. The presentation will include an overview of Burp Suite, how to get started with it, automated and manual testing techniques, and tips for web hacking. It will cover features of Burp like the proxy, spider, scanner, intruder, repeater, sequencer, and extender. The goal is to help attendees learn the foundation of using Burp Suite for web assessments.
1. Burp extensions can overcome web application hurdles through the Burp API. Interfaces like IMessageEditorTab and ITab allow creating new views of requests and responses, while processHTTPMessage and doPassiveScan can automate tasks by catching and rewriting traffic.
2. Examples include decoding custom encodings, signing requests, viewing unique response headers, and passively scanning for encoded values in cookies. Common problems are solved with minimal Python coding against the Burp API.
1. The Burp API allows extensions to overcome web application hurdles. Extensions can use IMessageEditorTab to decode custom encodings, processHTTPMessage to handle signed requests, ITab to provide new views of an application, and doPassiveScan to automate tasks with new scanner checks.
ITCamp 2012 - Mihai Nadas - Tackling the single sign-on challenge - ITCamp
The document discusses tackling the single sign-on challenge through claims-based identity and access control. It describes how claims-based identity works, benefits like simplified authentication and decoupled authorization. It also demonstrates configuring Windows Azure Access Control to provide single sign-on for an enterprise application, integrating identity providers and issuing normalized tokens.
Instant Payment Notification (IPN) is a messaging service that notifies users of events related to PayPal transactions. One can use IPN messages to automate back-office and administrative functions, such as fulfilling orders, tracking customers, and providing status and other transaction-related information.
How to Launch a Web Security Service in an Hour - Cyren, Inc
Want to find out how to launch your very own web security service in less than an hour? We take a deep dive into the fastest growing security market, explore the limitations of existing solutions, and demonstrate how to take your Web security “to the cloud” today.
Ashish Gharti and Bijay Limbu Senihang are founders of Nep Security and IT security researchers who consult for Entrust Solution Nepal. SQL injection occurs when an attacker can influence SQL queries an application passes to a database, potentially allowing data leakage, site defacement, malware infection, or spear phishing. Defenses include addslashes(), mysql_real_escape_string(), is_numeric(), sprintf(), and htmlentities().
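The defenses listed above are PHP-specific escaping helpers; as a more general illustration of the same goal, here is a minimal Java sketch using a parameterized query, which keeps user input out of the SQL text entirely (the table and column names are hypothetical):

// Parameterized query: user input is bound as data, never interpreted as SQL.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UserLookup {
    public static String findEmail(Connection conn, String username) throws Exception {
        String sql = "SELECT email FROM users WHERE username = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, username); // bound parameter, not string concatenation
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }
}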
The document describes the results of a study on non-verbal communication between friends. The study found that (1) friends show affection through facial expressions, physical contact, and a positive tone of voice, (2) they share laughter and jokes, and (3) they feel comfortable expressing emotions such as sadness with each other.
Psycho-Strategies for Social Engineering - Ishan Girdhar
This document discusses techniques for social engineering and influencing human behavior. It explains that people are not fully in control of their own actions and reactions, as many behaviors are hardwired. It then provides examples of psychological tactics that can be used to influence or control a situation by leveraging an understanding of human psychology, such as limiting options, using deadlines, inertia, expectations, and associating yourself with pleasant experiences. The document cautions that these techniques should not be used to harm or deceive others.
Burp Suite is a free security tool that is useful for performing web penetration testing. It consists of several tools, such as the proxy, spider, intruder, repeater, sequencer, and decoder, which allow capturing and modifying network traffic as well as automating attack attempts.
This document discusses different versioning strategies for cloud services. It presents strategies for versioning production and staging environments, isolating environments for different roles like QA and developers, using separate subscriptions to isolate environments and billing, and approaches for versioning SQL databases and WCF contracts. The key strategies covered include using slots or instances to separate environments, federating SQL databases by tenant or version, and supporting multiple versions of WCF contracts through single or multiple endpoints. References are provided for further reading on managing cloud services, versioning SQL databases, and WCF versioning strategies.
The document discusses testing the security of web services. It provides an overview of Windows Communication Foundation (WCF), explaining that it is Microsoft's framework for building networked applications and supports different protocols. It also discusses important concepts for WCF like addresses, bindings and contracts. The document then provides recommendations for tools to test WCF services, including WcfTestClient, WCF Storm and WSFuzzer, and discusses techniques like leveraging metadata and secure bindings.
Web services present unique challenges for penetration testing due to their complexity and differences from traditional web applications. There is a lack of standardized testing methodology and tools for web services. Many penetration testers are unsure how to properly scope and test web services. Existing tools have limitations and testing environments must often be built from scratch. A thorough understanding of web service standards and frameworks is needed to effectively test for vulnerabilities from both the client and server side.
The document discusses security patterns and practices for Windows Communication Foundation (WCF) services. It begins with an introduction to service-oriented architecture and WCF. It then covers defining web service threats, an overview of basic WCF security concepts like authentication, authorization, and encryption. The document discusses securing the transport channel and message integrity. It provides recommendations for secure configuration, appropriate bindings, and code-based best practices. Throughout, it emphasizes the importance of combining multiple security techniques and technologies to achieve security at the highest level.
A story of how we went about packaging perl and all of the dependencies that our project has.
Where we were before, the chosen path, and the end result.
The pitfalls and a view on the pros and cons of the previous state of affairs versus the pros/cons of the end result.
A short introduction to more advanced Python and programming in general. Intended for users who have already learned basic coding skills but want a rapid tour of the more in-depth capabilities offered by Python, along with some general programming background.
Exercises are available at: https://github.com/chiffa/Intermediate_Python_programming
The document provides an overview of core Java concepts including:
- Java is an object-oriented programming language and platform that runs on a virtual machine. It is used to create desktop, web, enterprise, mobile and other applications.
- Core Java concepts include objects, classes, inheritance, polymorphism, abstraction and encapsulation. The document also discusses variables and data types, OOP principles, object creation, method overloading and constructors.
- It provides examples of Hello World programs and explains Java memory areas like stack and heap. Key topics like static keyword, method vs constructor and method overloading are also summarized.
Steelcon 2014 - Process Injection with Python - infodox
This is the slides to accompany the talk given by Darren Martyn at the Steelcon security conference in July 2014 about process injection using python.
Covers using Python to manipulate processes by injecting code on x86, x86_64, and ARMv7l platforms, and writing a stager that automatically detects what platform it is running on and intelligently decides which shellcode to inject, and via which method.
The Proof of Concept code is available at https://github.com/infodox/steelcon-python-injection
[HES2013] Virtually secure, analysis to remote root 0day on an industry leadi... - Hackito Ergo Sum
Today most networks present one “gateway” to the whole network – The SSL-VPN. A vector that is often overlooked and considered “secure”, we decided to take apart an industry leading SSL-VPN appliance and analyze it to bits to thoroughly understand how secure it really is. During this talk we will examine the internals of the F5 FirePass SSL-VPN Appliance. We discover that even though many security protections are in-place, the internals of the appliance hides interesting vulnerabilities we can exploit. Through processes ranging from reverse engineering to binary planting, we decrypt the file-system and begin examining the environment. As we go down the rabbit hole, our misconceptions about “security appliances” are revealed.
Using a combination of web vulnerabilities, format string vulnerabilities and a bunch of frustration, we manage to overcome the multiple limitations and protections presented by the appliance to gain a remote unauthenticated root shell. Due to the magnitude of this vulnerability and the potential for impact against dozens of fortune 500 companies, we contacted F5 and received one of the best vendor responses we’ve experienced – EVER!
https://www.hackitoergosum.org
This document provides an overview of the fundamentals of Java, including its history, key concepts, and basic programming structures. It discusses Java's origins in 1995 as Oak, its bytecode and JVM execution environment, and basic data types. The document also demonstrates a simple "Hello World" Java program and covers topics like variables, operators, control flow, and projects.
The document discusses bytecode and the Java Virtual Machine (JVM). It provides an example of decompiling the "Hello World" Java program using javap to view the bytecode instructions. It also covers bytecode fundamentals like the stack machine model, instruction types, and how the operand stack and frames work. Finally, it demonstrates some common stack manipulation instructions.
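For illustration, here is a tiny program and the approximate "javap -c" listing for its main method; the listing is abbreviated, and constant pool indexes and exact output vary by JDK version.

// Source program whose bytecode is inspected below.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

// $ javac HelloWorld.java && javap -c HelloWorld
//
//   public static void main(java.lang.String[]);
//     Code:
//        0: getstatic     #7   // Field java/lang/System.out:Ljava/io/PrintStream;
//        3: ldc           #13  // String Hello World
//        5: invokevirtual #15  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
//        8: return

The four instructions show the stack machine model in miniature: getstatic pushes the PrintStream onto the operand stack, ldc pushes the string constant, and invokevirtual pops both to make the call.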
This document describes EhTrace, a tool for tracing the execution of binary programs through hooking and branch stepping. EhTrace uses the Windows VEH exception handler to single step programs by setting flags in the CPU context. It can be used to analyze program control flow at the basic block level for purposes like malware analysis, debugging, and code coverage. The document discusses techniques for maintaining control during tracing and fighting attempts by the target program to detect or alter the tracing.
This document provides an introduction to JVM bytecode, including how to inspect, generate, and understand bytecode. It discusses two main parts - JVM bytecode itself such as basic instructions and stack operations, and the JVM JIT compiler which compiles bytecode to machine code. Various tools for working with bytecode like javap and ASM are also introduced. The document is intended to help readers gain a better understanding of how the Java platform works from the lowest level.
This document provides an overview of using the OllyDbg debugger to analyze malware. It discusses OllyDbg's history and interface, how to load and debug malware using OllyDbg, setting breakpoints, tracing code execution, patching code, and analyzing shellcode. The key points are that OllyDbg is an effective tool for debugging malware, it allows setting different breakpoint types, tracing helps record execution, and shellcode can be directly analyzed by pasting it into OllyDbg memory.
Habitat is a tool for building and running distributed applications. It aims to standardize packaging and running applications across different environments. With Habitat, applications are packaged into "harts" which contain all their dependencies and can be run on any system. Habitat handles configuration, service discovery, and updates to provide a uniform way to deploy applications. Plans are used to define how to build harts in a reproducible way. The Habitat runtime then manages running applications as services.
This presentation was given as a Workshop at OSCON 2014.
New to Go? This tutorial will give developers an introduction and practical experience in building applications with the Go language. Gopher Steve Francia, author of [Hugo](http://hugo.spf13.com), [Cobra](http://github.com/spf13/cobra), and many other popular Go packages, breaks it down step by step as you build your own full-featured Go application. Starting with an introduction to the Go language, he then reviews the fantastic Go tools available. With our environment ready, we will learn by doing. The remainder of the time will be dedicated to building a working Go web and CLI application. Through our application development experience we will introduce key features, libraries, and best practices of using Go.
This tutorial is designed with developers in mind. Prior experience with any of the following languages: Ruby, Perl, Java, C#, JavaScript, PHP, Node.js, or Python is preferred. We will be using the MongoDB database as a backend for our application.
We will be using/learning a variety of libraries including:
* bytes and strings
* templates
* net/http
* io, fmt, errors
* cobra
* mgo
* Gin
* Go.Rice
* Cobra
* Viper
This document summarizes a presentation about a new way of developing Perl applications and the future of gperl, a fast Perl-like language. It discusses compiler modules for lexical analysis, parsing, and code generation that were originally developed for gperl and can now be used to build various tools and applications. These include a transpiler to run Perl 5 code in web browsers, a framework called PerlMotion for building iOS and OSX apps with Perl, and a static analysis tool for detecting copied code. The presentation encourages contributions to related open source projects and outlines plans to expand the capabilities of the static analysis and type inference engines.
This document provides an introduction to the Java programming language. It discusses key Java concepts like high-level vs low-level languages, common programming languages, how Java works by compiling to bytecode and using a virtual machine, and why Java was created. It also includes a simple "Hello World" Java program example to demonstrate Java syntax and how to compile and run a Java program.
This document provides an introduction to the Java programming language. It discusses the differences between high-level and low-level languages. It also lists several common programming languages and describes key features of Java, including how it works, why it was created, how programs are compiled and run, and how to write a simple "Hello World" program in Java.
This document provides an introduction to the Java programming language. It discusses the differences between high-level and low-level programming languages. It also lists several common programming languages and describes key features of Java, including how Java code is compiled into bytecode that can run on any device with a Java Virtual Machine. The document concludes with examples of "Hello World" programs written in Java.
Java is a widely used programming language that is mainly used for application programming. It is platform-independent and supports features like multi-threading and documentation comments. The key aspects of a simple Java program are that it must contain a class with a main method that can be the starting point of execution. The main method has a specific signature of public static void main(String[] args). When a Java program is run, the JVM (Java Virtual Machine) loads and executes the program by performing operations like loading code, verifying code, and providing a runtime environment.
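A minimal sketch pulling those points together; the class and method names are illustrative. The JVM looks for exactly this public static void main(String[]) entry point, and the two greet methods show overloading resolved by parameter list.

// Entry point plus a simple example of method overloading.
public class Launcher {

    static void greet(String name) {
        System.out.println("Hello, " + name);
    }

    // Same method name, different parameter list: method overloading.
    static void greet(String name, int times) {
        for (int i = 0; i < times; i++) {
            greet(name);
        }
    }

    public static void main(String[] args) {
        greet("Java");
        greet("JVM", 2);
    }
}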
6th Power Grid Model Meetup
Join the Power Grid Model community for an exciting day of sharing experiences, learning from each other, planning, and collaborating.
This hybrid in-person/online event will include a full day agenda, with the opportunity to socialize afterwards for in-person attendees.
If you have a hackathon proposal, tell us when you register!
About Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
Developing Schemas with FME and Excel - Peak of Data & AI 2025 - Safe Software
When working with other team members who may not know the Esri GIS platform or may not be database professionals, discussing schema development or changes can be difficult. I have been using Excel to help illustrate and discuss schema design/changes during meetings, and it has proven a useful tool to help illustrate how a schema will be built. With just a few extra columns, that Excel file can be sent to FME to create new feature classes/tables. This presentation will go through the steps needed to accomplish this task and provide some lessons learned and tips/tricks that I use to speed up the process.
Co-Constructing Explanations for AI Systems using Provenance - Paul Groth
Explanation is not a one-off - it's a process where people and systems work together to gain understanding. This idea of co-constructing explanations, or explanation by exploration, is a powerful way to frame the problem of explanation. In this talk, I discuss our first experiments with this approach for explaining complex AI systems by using provenance. Importantly, I discuss the difficulty of evaluation and some of our first approaches to evaluating these systems at scale. Finally, I touch on the importance of explanation to the comprehensive evaluation of AI systems.
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide... - Jasper Oosterveld
Sensitivity labels, powered by Microsoft Purview Information Protection, serve as the foundation for classifying and protecting your sensitive data within Microsoft 365. Their importance extends beyond classification, and they play a crucial role in enforcing governance policies across your Microsoft 365 environment. Join me, a Data Security Consultant and Microsoft MVP, as I share practical tips and tricks to unlock the full potential of sensitivity labels. I discuss sensitive information types, automatic labeling, and seamless integration with Data Loss Prevention, Teams Premium, and Microsoft 365 Copilot.
Domino IQ – What to Expect, First Steps, and Use Cases - panagenda
Webinar Recording: https://www.panagenda.com/webinars/domino-iq-was-sie-erwartet-erste-schritte-und-anwendungsfalle/
HCL Domino iQ Server – from the ideas portal to an implemented feature. Discover what it is, what it is not, and explore the opportunities and challenges it brings.
Key takeaways
- What Large Language Models (LLMs) are and how they relate to Domino iQ
- Essential prerequisites for deploying the Domino iQ server
- A step-by-step guide to setting up your Domino iQ server
- Share and discuss thoughts and ideas to maximize the potential of Domino iQ
AI Agents in Logistics and Supply Chain Applications Benefits and ImplementationChristine Shepherd
AI agents are reshaping logistics and supply chain operations by enabling automation, predictive insights, and real-time decision-making across key functions such as demand forecasting, inventory management, procurement, transportation, and warehouse operations. Powered by technologies like machine learning, NLP, computer vision, and robotic process automation, these agents deliver significant benefits including cost reduction, improved efficiency, greater visibility, and enhanced adaptability to market changes. While practical use cases show measurable gains in areas like dynamic routing and real-time inventory tracking, successful implementation requires careful integration with existing systems, quality data, and strategic scaling. Despite challenges such as data integration and change management, AI agents offer a strong competitive edge, with widespread industry adoption expected by 2025.
Exploring the advantages of on-premises Dell PowerEdge servers with AMD EPYC processors vs. the cloud for small to medium businesses’ AI workloads
AI initiatives can bring tremendous value to your business, but you need to support your new AI workloads effectively. That means choosing the best possible infrastructure for your needs—and many companies are finding that the cloud isn’t right for them. According to a recent Rackspace survey of IT executives, 69 percent of companies have moved some of their applications on-premises from the cloud, with half of those citing security and compliance as the reason and 44 percent citing cost.
On-premises solutions provide a number of advantages. With full control over your security infrastructure, you can be certain that all compliance requirements remain firmly in the hands of your IT team. Opting for on-premises also gives you the ability to design your infrastructure to the precise needs of that team and your new AI workloads. Depending on the workload, you may also see performance benefits, along with more predictable costs. As you start to build your next AI initiative, consider an on-premises solution utilizing AMD EPYC processor-powered Dell PowerEdge servers.
Jeremy Millul - A Talented Software DeveloperJeremy Millul
Jeremy Millul is a talented software developer based in NYC, known for leading impactful projects such as a Community Engagement Platform and a Hiking Trail Finder. Using React, MongoDB, and geolocation tools, Jeremy delivers intuitive applications that foster engagement and usability. A graduate of NYU’s Computer Science program, he brings creativity and technical expertise to every project, ensuring seamless user experiences and meaningful results in software development.
In this talk, Elliott explores how developers can embrace AI not as a threat, but as a collaborative partner.
We’ll examine the shift from routine coding to creative leadership, highlighting the new developer superpowers of vision, integration, and innovation.
We'll touch on security, legacy code, and the future of democratized development.
Whether you're AI-curious or already a prompt engineer, this session will help you find your rhythm in the new dance of modern development.
Establish Visibility and Manage Risk in the Supply Chain with Anchore SBOMAnchore
Over 70% of any given software application is made up of open source software (most likely not even obtained from the original source), yet only 15% of organizations feel confident in their risk management practices.
With the newly announced Anchore SBOM feature, teams can start safely consuming OSS while mitigating security and compliance risks. Learn how to import SBOMs in industry-standard formats (SPDX, CycloneDX, Syft), validate their integrity, and proactively address vulnerabilities within your software ecosystem.
Improving Developer Productivity With DORA, SPACE, and DevExJustin Reock
Ready to measure and improve developer productivity in your organization?
Join Justin Reock, Deputy CTO at DX, for an interactive session where you'll learn actionable strategies to measure and increase engineering performance.
Leave this session equipped with a comprehensive understanding of developer productivity and a roadmap to create a high-performing engineering team in your company.
AI Creative Generates You Passive Income Like Never BeforeSivaRajan47
For years, building passive income meant traditional routes - stocks, real estate, or online businesses that required endless hours of setup and maintenance. But now, Artificial Intelligence (AI) is redefining the landscape. We're no longer talking about automation in the background; we're entering a world where AI creatives actively design, produce, and monetize content and products, opening the floodgates for passive income like never before.
Imagine AI tools writing books, designing logos, building apps, editing videos, creating music, and even selling your digital products 24/7 - without you lifting a finger after setup. This isn't the future. It's happening right now. And if you act fast, you can ride the wave before it becomes saturated.
In this in-depth guide, we'll show you how to tap into AI creativity for real, sustainable, passive income streams - no fluff, no generic tips - just actionable, traffic-driving insights.
Trends Artificial Intelligence - Mary MeekerClive Dickens
Mary Meeker’s 2024 AI report highlights a seismic shift in productivity, creativity, and business value driven by generative AI. She charts the rapid adoption of tools like ChatGPT and Midjourney, likening today’s moment to the dawn of the internet. The report emphasizes AI’s impact on knowledge work, software development, and personalized services—while also cautioning about data quality, ethical use, and the human-AI partnership. In short, Meeker sees AI as a transformative force accelerating innovation and redefining how we live and work.
Top 25 AI Coding Agents for Vibe Coders to Use in 2025.pdfSOFTTECHHUB
I've tested over 50 AI coding tools in the past year, and I'm about to share the 25 that actually work. Not the ones with flashy marketing or VC backing – the ones that will make you code faster, smarter, and with way less frustration.
Burp plugin development for java n00bs (44 con)
1. Burp Plugin Development for
Java n00bs
44Con 2012
www.7elements.co.uk | blog.7elements.co.uk | @7elements
2. /me
• Marc Wickenden
• Principal Security Consultant at 7 Elements
• Love coding (particularly Ruby)
• @marcwickenden on the Twitterz
• Most importantly though…..
www.7elements.co.uk | blog.7elements.co.uk | @7elements
4. If you already know Java
You’re either:
• In the wrong room
• About to be really offended!
5. Agenda
• The problem
• Getting ready
• Introduction to the Eclipse IDE
• Burp Extender Hello World!
• Manipulating runtime data
• Decoding a custom encoding scheme
• “Shelling out” to other scripts
• Limitations of Burp Extender
• Really cool Burp plugins already out there to fire
your imagination
8. The problem
• Burp Suite is awesome
• De facto web app tool
• Open source alternatives don’t compare
IMHO
• Tools available/cohesion/protocol support
• Burp Extender
11. How? - Burp Extender
• “allows third-party developers to extend the
functionality of Burp Suite”
• “Extensions can read and modify Burp’s
runtime data and configuration”
• “initiate key actions”
• “extend Burp’s user interface”
http://portswigger.net/burp/extender/
13. Java 101
• Java source is compiled to bytecode (class file)
• Runs on Java Virtual Machine (JVM)
• Class-based
• OO
• Write once, run anywhere (WORA)
• Two distributions: JRE and JDK
14. Java 101 continued…
• Usual OO stuff applies: objects, classes, methods, properties/variables
• Lines end with ;
15. Java 101 continued…
• Source files must be named after the public
class they contain
• public keyword denotes method can be called
from code in other classes or outside class
hierarchy
16. Java 101 continued…
• class hierarchy defined by directory structure:
• uk.co.sevenelements.HelloWorld =
uk/co/sevenelements/HelloWorld.class
• JAR file is essentially ZIP file of
classes/directories
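To make the naming rules above concrete, here is a minimal sketch using the deck's own example package (the class body is just a placeholder):

// Must live at uk/co/sevenelements/HelloWorld.java on disk,
// and ends up as uk/co/sevenelements/HelloWorld.class inside the JAR.
package uk.co.sevenelements;

public class HelloWorld
{
    // public: callable from code in other classes or outside the class hierarchy
    public void sayHello()
    {
        System.out.println("Hello World!");
    }
}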
17. Java 101 continued…
• void keyword indicates method will not return
data to the caller
• main method called by Java launcher to pass
control to the program
• main must accept array of String objects (args)
18. Java 101 continued…
• Java loads class (specified on CLI or in JAR
META-INF/MANIFEST.MF) and starts public
static void main method
• You’ve seen this already with Burp:
– java -jar burpsuite_pro_v1.4.12.jar
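Putting slides 17 and 18 together, a minimal sketch of an entry point looks like this; the launcher finds the class either from the command line or from the Main-Class entry in the JAR's META-INF/MANIFEST.MF (the JAR name below is just an example):

package uk.co.sevenelements;

public class HelloWorld
{
    // The Java launcher passes control here, e.g.:
    //   java -cp . uk.co.sevenelements.HelloWorld
    // or, if the JAR manifest names the class:
    //   java -jar hello.jar
    public static void main(String[] args)   // must accept an array of String objects
    {
        System.out.println("Hello World!");  // void: nothing is returned to the caller
    }
}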
22. First we need some tools
• Eclipse IDE – de facto free dev tool for Java
• Not necessarily the best or easiest thing to use
• Alternatives to consider:
– JetBrains IntelliJ (my personal favourite)
– NetBeans (never used)
– JCreator (again, never used)
– Terminal/vim/javac < MOAR L33T
25. Java JDK
• Used to be bundled with Eclipse
• Due to licensing (I think) this is no longer the
case
• Grab from Sun Oracle’s website:
• http://download.oracle.com/otn-pub/java/jdk/7u7-b11/jdk-7u7-windows-
x64.exe?AuthParam=1347522941_2b61ee3cd1f38a0abd1be312c3990fe5
27. Create a Java Project
• File > New > Java Project
• Project Name: Burp Hello World!
• Leave everything else as default
• Click Next
29. Java Settings
• Click on Libraries tab
• Add External JARs
• Select your burpsuite.jar
• Click Finish
30. Create a new package
• File > New > Package
• Enter burp as the name
• Click Finish
31. Create a new file
• Right-click burp package > New > File
• Accept the default location of src
• Enter BurpExtender.java as the filename
• Click Finish
34. Loading external classes
• We need to tell Java about external classes
– Ruby has require
– PHP has include or require
– Perl has require
– C has include
– Java uses import
35. Where is Burp?
• We added external JARs in Eclipse
• Only helps at compilation
• Need to tell our code about classes
– import burp.*;
36. IBurpExtender
• Available at
http://portswigger.net/burp/extender/burp/IBurpExtender.html
– “ Implementations must be called BurpExtender,
in the package burp, must be declared public, and
must provide a default (public, no-argument)
constructor”
37. In other words
public class BurpExtender
{
}
• Remember, Java makes you name files after
the class so that’s why we named it
BurpExtender.java
38. Add this
package burp;

import burp.*;   // as introduced in slide 35 (redundant here, since this file is already in package burp)

public class BurpExtender
{
    // Called by Burp whenever any of its tools makes an HTTP request
    // or receives a response (see slide 45).
    public void processHttpMessage(
            String toolName,
            boolean messageIsRequest,
            IHttpRequestResponse messageInfo) throws Exception
    {
        System.out.println("Hello World!");
    }
}
39. Run the program
• Run > Run
• First time we do this it’ll ask what to run as
• Select Java Application
45. What’s happening?
• Why is it spamming “Hello World!” to the
console?
• We defined processHttpMessage()
• http://portswigger.net/burp/extender/burp/IBurpExtender.html
– "This method is invoked whenever any of Burp's tools makes an HTTP request or receives a response"
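Because it fires for everything, a quick way to quieten the console is to filter on the arguments Burp passes in. A minimal sketch, a variation of the slide-38 extender that only cares about requests made by the proxy tool (the exact tool-name string is an assumption here):

package burp;

public class BurpExtender
{
    public void processHttpMessage(
            String toolName,
            boolean messageIsRequest,
            IHttpRequestResponse messageInfo) throws Exception
    {
        // Only log outgoing requests that come from the proxy tool;
        // responses and traffic from other tools are ignored.
        if (messageIsRequest && "proxy".equalsIgnoreCase(toolName))
        {
            System.out.println("Hello World! (proxy request)");
        }
    }
}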
47. (Diagram) RepeatAfterMeClient.exe → Burp Suite (processProxyMessage, processHttpMessage) → http://wcfbox/RepeaterService.svc
49. We’ve got to do a few things
• Split the HTTP Headers from FI body
• Decode FI body
• Display in Burp
• Re-encode modified version
• Append to headers
• Send to web server
• Then the same in reverse
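A rough sketch of the first couple of steps in the list above: splitting the HTTP headers from the Fast Infoset body. The decodeFastInfoset helper is a hypothetical placeholder for whatever FastInfoset.jar call you end up wiring in:

package burp;

import java.util.Arrays;

public class FiMessageHelper
{
    // Find the offset of the body: the first byte after the CRLFCRLF header terminator.
    static int bodyOffset(byte[] message)
    {
        for (int i = 0; i + 3 < message.length; i++)
        {
            if (message[i] == '\r' && message[i + 1] == '\n'
                    && message[i + 2] == '\r' && message[i + 3] == '\n')
            {
                return i + 4;
            }
        }
        return message.length;   // no body found
    }

    // HTTP headers as raw bytes, including the terminating blank line.
    static byte[] headers(byte[] message)
    {
        return Arrays.copyOfRange(message, 0, bodyOffset(message));
    }

    // The binary Fast Infoset body.
    static byte[] body(byte[] message)
    {
        return Arrays.copyOfRange(message, bodyOffset(message), message.length);
    }

    // Hypothetical placeholder: decode the Fast Infoset bytes to plain XML
    // using whichever FastInfoset.jar class you choose.
    static byte[] decodeFastInfoset(byte[] fiBody)
    {
        throw new UnsupportedOperationException("wire up FastInfoset.jar here");
    }
}

Re-encoding for the later steps is the mirror image: serialize the edited XML back to Fast Infoset, append it to the stored headers, and send it on.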
51. • Right-click Project > Build Path > Add External
Archives
• Select FastInfoset.jar
• Note that imports are now yellow
61. Running outside of Eclipse
• Plugin is working nicely, now what?
• Export to JAR
• Command line to run is:
• java -classpath yourjar.jar;burp_pro_v1.4.12.jar burp.StartBurp (use : instead of ; on Linux/macOS)
62. Limitations
• We haven’t coded to handle/decode the
response
• Just do the same in reverse
• processHttpMessage fires before processProxyMessage, so we can't alter and then re-encode the message
• Solution: chain two Burp instances together
63. Attribution
• All lolcatz courtesy of lolcats.com
• No cats were harmed in the making of this workshop
• Though some keyboards were….
#5: In the wrong room / about to be really offended. I don't know much about Java, I don't know the right terms for things, and I don't know the best style of writing it. But this code will work, and that's my primary objective today. It doesn't have to be pretty, it just has to work. That's the difference between delivering a good test and a bad one, IMHO.
#10: Previous app test: a WCF service written in C#, not using the WCF binary protocol but SOAP with Fast Infoset XML encoding. Burp Suite couldn't read it.
#23: IntelliJ Community Edition is available. We're going with Eclipse because it works and is free and fully functional. You can port this learning to anything else.
#27: Package Explorer – like a directory listing of your classes and src files. Main window – where we edit files. Task list – I normally close this, to be honest. Outline view – quite useful, gives a breakdown of the methods and properties of the classes you are working on. Problems – keep your eye on this bad boy, it can be very useful.
#36: Notice how it's already popping up little tips. In this case we've declared an import but not used any of the classes. We'll fix that…
#37: Javadoc is the Java standard for documentation. It is generated automatically from comments in the code. Burp Extender has Javadoc available online. We are going to use this a lot. Let's start… er, right…
#38: This is our bare bones. Note the import burp.*; isn’t shown
#39: Don't worry too much about what it all means just at the second. https://github.com/7Elements/burp_workshop/tree/master/Burp%20Hello%20World!
#58: That's great, writing out to the console – but we need to intercept and send onwards. We need to shuffle stuff around a bit then. https://github.com/7Elements/burp_workshop/tree/master/Burp%20Fastinfoset%20Decoder%20-%20Take%20Three
#59: Walk through adding code to processProxyMessage. Show how we can decode in the Burp Proxy window by returning a new byte[]. Then how it fails because the app receives plain text, not FI.
#60: Now we add a re-encode method to processHttpMessage using a custom HTTP header. We can exploit the flow order in Burp. Remember processProxyMessage is called *before* processHttpMessage – win. https://github.com/7Elements/burp_workshop/tree/master/Burp%20Fastinfoset%20Decoder%20-%20Take%20Four