Episodes

  • In this episode, hosts Lois Houston and Nikita Abraham discuss improvements in time and data handling and data storage in Oracle Database 23ai. They are joined by Senior Principal Instructor Serge Moiseev, who explains the benefit of allowing databases to have their own time zones, separate from the host operating system. Serge also highlights two data storage improvements: Automatic SecureFiles Shrink, which optimizes disk space usage, and Automatic Storage Compression, which enhances database performance and efficiency. These features aim to reduce the reliance on DBAs and improve overall database management. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and joining me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! Over the past two weeks, we've delved into database sharding, exploring what it is, Oracle Database Sharding, its benefits, and architecture. We’ve also examined each new feature in Oracle Database 23ai related to sharding. If that sounds intriguing to you, make sure to check out those episodes. And just to remind you, even though most of you already know, 23ai was previously known as 23c.

    01:04

    Nikita: That’s right, Lois. In today’s episode, we’re going to talk about the 23ai improvements in time and date handling and data storage with one of our Senior Principal Instructors at Oracle University, Serge Moiseev. Hi Serge! Thanks for joining us today. Let’s start with time and date handling. I know there are two new changes here in 23ai: the enhanced time zone data upgrade and the improved system date and system timestamp handling. What are some challenges associated with time zone data in databases?

    01:37

    Serge: Time zone definitions change from time to time for legislative reasons. There are certain considerations. Changes include daylight saving time switches, and that activity affects the Oracle Database time zone files.
    Time zone files are modified and used by the administrators. Customers select the time zone file to use whenever it's appropriate, and customers can manage the upgrade whenever it happens.

    The upgrades affect columns of type TIMESTAMP with TIME ZONE. Now, the upgrades can be online or offline.
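    For reference, the time zone file version a database is currently using can be checked with standard dictionary views; this is a minimal sketch and is not specific to 23ai.

        SELECT * FROM v$timezone_file;          -- name and version of the loaded time zone file

        SELECT property_name, property_value
          FROM database_properties
         WHERE property_name LIKE 'DST_%';      -- DST versions and upgrade state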

    02:24

    Lois: And how have we optimized this process now?

    Serge: Oracle Database 23c improves the upgrade by reducing the resources used, by selectively applying the updates, and by minimizing the application impact. Only the data that has dependencies on the time zone is impacted by the upgrade.

    The optimization of the time zone file upgrade does not really change the upgrade process itself, so the upgrade can still be done offline. In that case, the database would be unavailable for a prolonged period of time, which is not optimal for today's database availability requirements.

    With an online upgrade, we want to minimize the application impact while the data is being upgraded. With the 23c enhancement for time zone file change handling, the amount of modified data is minimized, which means that the database updates only the impacted rows. And that reduces the impact on the applications and other database operations.
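    For reference, the classic upgrade flow is driven by the DBMS_DST package; the sketch below is heavily simplified (it omits the prepare phase and the restarts into and out of UPGRADE mode), and the version number is a placeholder, so treat it as an outline rather than a runbook.

        -- simplified sketch: assumes the new time zone file (version 42 is a placeholder)
        -- is installed and the database has been restarted in UPGRADE mode
        BEGIN
          DBMS_DST.BEGIN_UPGRADE(new_version => 42);   -- open the DST upgrade window
        END;
        /

        -- after restarting in normal mode, convert the affected TIMESTAMP WITH TIME ZONE data
        DECLARE
          l_failures PLS_INTEGER;
        BEGIN
          DBMS_DST.UPGRADE_DATABASE(l_failures);       -- upgrade affected tables
          DBMS_DST.END_UPGRADE(l_failures);            -- close the upgrade window
        END;
        /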

    03:40

    Nikita: Serge, how does updating only the impacted rows improve the efficiency of the upgrade process?

    Serge: The enhanced time zone update benefits customers who manage a large fleet of databases. They will benefit tremendously from the lower downtime. The DBAs will benefit from the faster updates and the lower resource consumption needed to apply those updates. And that improves the efficiency of the update process.

    Tables with no affected data are simply skipped and not touched. All of this results in significant resource savings on the upgrade of the time zone files. It applies to all customers that use TIMESTAMP WITH TIME ZONE columns for their data storage.

    04:32

    Lois: Excellent! Now, what can you tell us about the improved system date and system timestamp handling?

    Serge: Date and time in Oracle databases depend on the system time as well as the database settings. Now, the system time can be based on a local time zone set for an individual database.

    04:53

    Nikita: How was it before this update?

    Serge: Before 23c, the time always matched the time zone of the database host operating system. Now, imagine that we use either multitenant or cloud-based environments, where the host OS time zone is not really the same as that of the application, which runs in a different geographic locality or works with data from other locations.

    And the system time obviously applies not only to the data stored and updated in the database rows, but also to the Scheduler, Flashback, materialized view refresh, Recovery Manager, and other time-sensitive features in the database itself.

    Now, with database time versus operating system time, there is a need to be more selective. It is desirable that the applications use a database time in the same time zone as the one in which the applications are actually being used.

    And multitenant and cloud databases will certainly experience a mismatch with the host operating system time zone, which is not local for applications that run in other geographical locations, or which may not recognize, for example, daylight saving time.

    So a migration challenge is obviously present. If you want to migrate from a specific on-premises database to either multitenant or cloud, you would get the host operating system time zone by default.

    06:38

    Lois: And that’s obviously not convenient for the applications, right?

    Serge: Well, with database-specific time in Oracle Database 23c, any cloud database can explicitly set its local time zone to whatever the customer's requirements are. And any pluggable database can also set its own local time zone to the customer's requirements, without inheriting the time zone from the container database it is currently running in.

    This simplifies migration to multitenant or cloud for applications that are time-sensitive. And it offers more intuitive, easier database monitoring and development.
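    For illustration, a minimal sketch of setting and inspecting a database time zone; the ALTER DATABASE statement below sets the long-standing database time zone property, while the exact mechanism 23ai uses to tie SYSDATE and SYSTIMESTAMP to a per-PDB time zone should be confirmed in the documentation. The time zone name is a placeholder.

        -- set the database (or PDB) time zone property; takes effect after the next restart
        ALTER DATABASE SET TIME_ZONE = 'Europe/Paris';

        -- compare the database, session, and system notions of time
        SELECT dbtimezone, sessiontimezone, systimestamp, current_timestamp FROM dual;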

    07:23

    Working towards an Oracle Certification this year? Take advantage of the Certification Prep live events in the Oracle University Learning Community. Get tips from OU experts and hear from others who have already taken their certifications. Once you’re certified, you’ll gain access to an exclusive forum for Oracle-certified users. What are you waiting for? Visit mylearn.oracle.com to get started.

    07:51

    Nikita: Welcome back! Let’s move on to the data storage improvements. We have two updates here as well, automatic secure file shrink and automatic storage compression. Let’s start with the first one. But before we get into it, Serge, can you explain what SecureFiles are?

    Serge: SecureFiles are the default storage mechanism for large objects in Oracle Database. They are strongly recommended by Oracle to store and manage large object data.

    The LOBs are stored in segments. Those segments may accumulate large amounts of free space over time. Because of updates to the LOB data, the fragmentation of the used space grows, depending, of course, on the frequency and the scope of the updates.

    The storage efficiency could be improved by shrinking segments so that the free space is removed. Manual SecureFiles shrinking has been available since Oracle Database 21c, requiring administrators to perform these tasks themselves.
    Traditionally, SecureFiles shrinking required time-consuming DBA activities. DBAs would need to manually identify eligible LOB segments using Segment Advisor, PL/SQL, or built-in database views.
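    For reference, a minimal sketch of the kind of dictionary query a DBA might use to list SecureFiles LOB segments and their sizes; the owner name is a placeholder.

        SELECT l.owner, l.table_name, l.column_name, s.segment_name,
               ROUND(s.bytes / 1024 / 1024) AS segment_mb
          FROM dba_lobs     l
          JOIN dba_segments s
            ON s.owner = l.owner
           AND s.segment_name = l.segment_name
         WHERE l.securefile = 'YES'
           AND l.owner = 'APP_OWNER'          -- placeholder schema
         ORDER BY s.bytes DESC;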

    Once identified, the administrators would manually execute shrink operations on very large LOBs, which takes too much time and may result in excessive disk space consumption. For example, the code to perform this shrinking would look like ALTER TABLE some_table SHRINK SPACE CASCADE.

    That would shrink all LOB segments in a particular table. If you want to scope the shrinking to a single column, the code would be ALTER TABLE some_table MODIFY LOB, followed by the column name, and then SHRINK SPACE.

    This affects only a single column in a table with LOBs.
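    Written out as complete statements, the two variants Serge describes look like this; the table and column names are placeholders.

        -- shrink the table and all dependent segments, including its LOB segments
        ALTER TABLE app_owner.documents SHRINK SPACE CASCADE;

        -- scope the shrink to a single LOB column
        ALTER TABLE app_owner.documents MODIFY LOB (doc_body) (SHRINK SPACE);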

    10:01

    Lois: So, how has Automatic SecureFiles Shrink made things better?

    Serge: Automatic SecureFiles Shrink removes the emphasis on the DBAs having to manually perform these tasks. And it results in a more optimal use of space over time.

    It is integrated into the automated database maintenance tasks. The automation, once enabled, runs every 30 minutes, collects eligible LOB segments, and shrinks them offline. The execution time and the freed space will vary depending on the fragmentation and the size of the LOBs. Each shrink execution may reclaim up to 5 gigabytes of unused disk space from each LOB segment that is idle.

    At a high level, Automatic SecureFiles Shrink improves Oracle Database 23c storage usage efficiency. It is part of the ongoing Oracle Database improvement effort and transparently reclaims the free space with negligible to no impact on the performance of database operations.

    Again, this is done in the background without affecting the running processes. It makes Oracle Database 23c less dependent on DBA activities while reducing the disk space required to store SecureFiles and reducing the usage of LOB segments.

    Automatic SecureFiles Shrink runs incrementally, in small steps over time. Some of the features are tunable. And it is supported for all types of large object storage: compressed, encrypted, and deduplicated LOB segments.

    11:50

    Nikita: Right, and note that this feature is turned on out-of-the-box in the Autonomous Database 23ai in Oracle Cloud. Now, let’s talk about Automatic Storage Compression, Serge.
    Serge: With Automatic Storage Compression and Automatic Clustering, storage compression gives you background compression functionality. Directly loaded data is first left uncompressed to speed up the actual load process. Rows are then moved into Hybrid Columnar Compression format in the background, asynchronously.

    Automatic Clustering applies advanced heuristic algorithms to cluster the stored data depending on the workload and data access patterns, and data access is optimized to make more efficient use of database table indexes, zone maps, and join zone maps.

    The advantages of Automatic Storage Compression include improvements to Oracle Database 23c storage efficiency as well. It is part of the continuous, ongoing Oracle Database improvement effort. And it brings performance gains: it speeds up uncompressed data loads while compressing in the background.

    Because of that, the latencies to load and compress data are also reduced. This works in combination with Hybrid Columnar Compression in particular.

    And it results in fewer DBA activities and makes database management less dependent on DBA time, availability, and effort.

    Automatic Storage Compression performs operations asynchronously on the data that has already been loaded. To control Automatic Storage Compression on-premises, it must be enabled explicitly. And you have to have Heat Map enabled on your Oracle Database objects.

    The table must use Hybrid Columnar Compression and be placed on a tablespace with SEGMENT SPACE MANAGEMENT AUTO that allows autoallocation. And this feature, again, is transparent for the Autonomous Database 23c in the Oracle Cloud.
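    As a rough sketch of those prerequisites only (the statement that explicitly enables Automatic Storage Compression itself is omitted, all names are placeholders, and Hybrid Columnar Compression also requires supported storage such as Exadata):

        -- Heat Map must be enabled for the instance
        ALTER SYSTEM SET heat_map = ON SCOPE = BOTH;

        -- a locally managed tablespace with autoallocation and automatic segment space management
        CREATE TABLESPACE hcc_data
          DATAFILE 'hcc_data01.dbf' SIZE 1G AUTOEXTEND ON
          EXTENT MANAGEMENT LOCAL AUTOALLOCATE
          SEGMENT SPACE MANAGEMENT AUTO;

        -- a table using Hybrid Columnar Compression in that tablespace
        CREATE TABLE sales_hcc
          TABLESPACE hcc_data
          COLUMN STORE COMPRESS FOR QUERY HIGH
          AS SELECT * FROM sales;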

    14:21

    Lois: Thanks for that quick rundown of the new features, Serge. We really appreciate you for taking us through them. To learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Lois Houston…

    Nikita: And Nikita Abraham signing off!

    14:50
    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Join hosts Lois Houston and Nikita Abraham in Part 2 of the discussion on database sharding with Ron Soltani, a Senior Principal Database & Security Instructor. They talk about sharding native replication, directory-based sharding, and coordinated backup and restore for sharded databases, explaining how these features work and their benefits. Additionally, they explore the automatic bulk data move on sharding keys and the ability to split and move partition sets, highlighting the flexibility and efficiency they bring to data management. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone! In our last episode, we dove into database sharding and Oracle Database Sharding in particular. If you haven’t listened to it yet, I’d suggest you go back and do so before you listen to this episode because it will give you a lot of context.

    00:53

    Lois: Right, Niki. Today, we will discuss all the 23ai new features related to database sharding. We will cover sharding native replication, directory-based sharding, coordinated backup and restore for sharded databases, and a few more.

    Nikita: And we’re so happy to have Ron Soltani back on the podcast. If you don’t already know him, Ron is a Senior Principal Database & Security Instructor with Oracle University. Hi Ron! Let’s talk about sharding native replication, which is RAFT-based, meaning that it is reliable and fault-tolerant, usually providing subsecond, zero data loss replication support. Tell us more about it, please.

    01:33

    Ron: This is completely transparent replication built in within Oracle sharding that duplicates data across the different shards. So data is generally put into chunks. And then the chunks are replicated across either three or five different shards, depending on how much fault tolerance is required.
    This is completely provided by the Oracle sharded database and does not require the use of any other component like GoldenGate or Data Guard. If you remember when we talked about the architecture, we said that each shard, each database, could have a standby, whether through GoldenGate or through Data Guard.
    That was how high availability was supported. With sharding native replication, you don't rely on a secondary database. The shards will back each other up by holding replicas, globally managing the replicas, making sure everything is preserved, and managing all of the failure operations.
    Now, this is a logical replication, generally consensus-based, with the different components all aware of each other. They know which component is good, depending on the load and depending on failures. The sharded databases behind the scenes decide which shard is actually serving the data to the client. That can provide subsecond failovers with zero data loss.

    03:15

    Lois: And what are the benefits of this?

    Ron: The major benefit of sharding native replication is that it is completely transparent to the application or any of the structures. You just specify that you want to use this replication and identify the replication factor. The rest is managed by the Oracle sharded database behind the scenes.
    It supports fast failover with zero data loss, usually subsecond failovers. And depending on the number of replicas, it can even tolerate multiple failures, like two server failures.

    And when workloads are submitted, they are also load-balanced across all of these shards based on where the data is located and where the replicas are. So this way, it can also provide you with somewhat better utilization of the hardware and easier load administration.

    So generally, it's designed to help you keep your regular SQL-based databases without having to resort to NoSQL or similar environments and move into other databases.

    04:33

    Nikita: So next is directory-based sharding. Can you tell us what directory-based sharding is, Ron?

    Ron: Directory-based sharding basically allows the user to define the key values that are used, and combined, for different partitions, so you get better control over the location of the data: which partition, which shard. So this allows you to set up a good configuration.

    Now, many times we may have a key that may not have enough distinct values for hash partitioning to distribute the data well. Sometimes we may not even know what keys are going to come in the future, and those need to be accommodated later. You really don't want to have to go reorganize all of the data based on new hash functions. So directory-based sharding is for when data cannot be managed and distributed using hash partitioning, or when we need full control over the combinations of where data exists.

    05:36

    Lois: Can you give us a practical example of how this works?

    Ron: So let's say our company is very small in three different countries. I can combine those three countries into one single shard, and then have three other big countries, each one sitting in its own individual shard. All of this is done through directory-based sharding. What is good about this is that the directory, which is a table created behind the scenes and stored in the catalog, is available to the client, cached with it, and used for connection mapping and data access. So it can give you a lot of benefits at a very high level.

    06:24

    Nikita: Speaking of benefits, what are the key advantages of using directory-based sharding?

    Ron: The first benefit is that it allows you to group the data together based on whatever values you want, depending on where you want to put it across the shards. So all of that is much better and more easily controlled by us, or by the designers. Now, this matters when there are not enough distinct values available, so that using hash-based partitioning would result in an uneven distribution of the data.
    Therefore, we may be able to use this directory for better distribution of the data, since we understand the data structure better than a plain hash function does. And you have a specification where you can go ahead and create future components, future partitions, depending on how large they're going to be. Maybe you create them in an existing shard and later put them in another shard. So the capability of having all of those controls becomes essential for managing this specific type of data.
    If a key value requires changes, for example, as we said, a client is getting too big, you can take the key value and split it, or take multiple key values and combine them, and move data from one location to another. All of these operations are maintained automatically behind the scenes once we provide the changes. The directory-based sharding and the sharded database then manage all of the data structure and movement behind the scenes, using some of this further functionality.

    And finally, large chunks of data can then be moved from one location to another. This is part of the automatic chunk data move and so on, but it is utilized within directory-based sharding to give us control of this data and how we're going to move and manage it as the load or the size of the data changes.

    08:50

    Lois: Ron, what is the purpose of the coordinated backup and restore system in Oracle Database Sharding?

    Ron: So, basically when we talk about a coordinated backup and restore, remember in a sharded database, I have different databases. Each database is a shard. When you take a backup, each database creates its own backup.

    So to have consistent data across all of the shards for the whole schema, it is extremely important for these databases to be coordinated when the backup is taken, when the restore is being done. So you have consistency of the data maintained across all of the shards.

    09:28

    Nikita: So, how does this coordination actually happen?

    Ron: You don't submit this through RMAN directly. You submit it through the global management tool that is used for the sharded database. And it's the global management tool that actually submits your request to each database, while maintaining consistency of when the actual backup is taken, at what SCN.
    So that SCN coordination across all of the shards is then maintained for the backup, so you can create a consistent backup or restore to a consistent point in time across the sharded database. Now, this system was enhanced in 23c to support multiple destinations.

    So you can now send your backup to an object store. You can send it to ZDLRA. You can send it to Amazon S3. So multiple locations can now be defined where you can send these backups to. You can also use multiple recovery catalogs.
    So let's say I have data that is located in different countries, and we have a requirement that data for each country must stay in that country. So I need to also use a separate catalog to maintain that partition.

    So now I can use multiple catalogs and define which catalog is maintaining which partition, to satisfy those types of requirements or any data administration requirement when it comes to backup and recovery. In addition, you can also now specify different types of encryption to be used, if you want to have a different encryption algorithm for each of the databases that you're backing up. It can be identified and then set up for each one of those components.

    So these advancements now allow you to manage this coordinated backup and restore with all of the various specific configuration that may be required based on the data organization. The encryption, as I mentioned, can also be done across that with different algorithms. And you can define different components.

    Finally, there is much better error handling and response available through this global system. Since things have been synchronized, you get much better information for diagnosing any issues.

    12:15

    Want to get the inside scoop on Oracle University? Head over to the Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get first-hand access to new products. Read the OU Learning Blog. Participate in challenges. And stay up-to-date with upcoming certification opportunities. Visit www.mylearn.oracle.com to get started.

    12:41

    Nikita: Welcome back! Continuing with the updates… next up is the automatic bulk data move on sharding keys. Ron, can you explain how this works and why it's significant?

    Ron: And by the way, this doesn't have to be bulk data. It could be just an individual row, or it could be bulk data, a huge piece of data that is going to be moved.

    Now, in the past, when the shard key of an existing record was going to be updated, we basically had to remove that row from the table, moving it to a temporary table or to another location. Basically, you delete the row, then change the value and reinsert the row, so the row would then be inserted into the proper location.

    That causes a lot of work and requires specific code-writing and whatnot to manage those specific types of situations. And of course, if there is a lot of data, now you're moving that bulk data twice.

    13:45

    Lois: Yeah… you’re moving it to one location and then moving it back in. That’s a lot of double work, not to mention that it all needs to be managed manually, right? So, how has this process been improved?

    Ron: So now, basically, you can just go ahead and update the value of the partition key, and the data will then automatically move to the new location. So this gives you complete flexibility with the shard key values.

    This is also completely transparent, and again, completely managed behind the scenes. All you do is identify what is going to be changed. Then the database will maintain the actual data location and movement behind the scenes.

    14:31

    Lois: And what are some of the specific benefits of this feature?

    Ron: Basically, it allows you to now be flexible, to be able to update the shard key without having to worry about, oh, in which location does this value have to exist? Do I have to delete it and reinsert it? And all of those different operations.

    And this is done automatically by the Oracle database, but it does require you to enable row movement at the table level. So for tables that are expected to have partition key updates without knowing when that can happen, any time, by the clients directly or something, we may need to enable row movement at the table level and leave it enabled. It does have a tiny bit of overhead when enabled, as it maintains some metadata about these row locations behind the scenes.

    But for cases where, let's say, I know when the shard key is going to be changed, we can use, say, a written procedure for when that particular shard key is going to be changed. Then, when the shard key is updated, the data will automatically move to the new location based on that shard key operation. So we don't need to move the data manually in and out or to different locations.
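    A minimal sketch of what that looks like in practice; the table, column, and values are placeholders, and the automatic relocation applies to sharded tables as Ron describes.

        -- allow rows to change physical location when the shard key changes
        ALTER TABLE accounts ENABLE ROW MOVEMENT;

        -- an ordinary update of the sharding key; the row is relocated behind the scenes
        UPDATE accounts
           SET country_code = 'CA'
         WHERE account_id = 1001;
        COMMIT;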

    16:03

    Nikita: In our final segment, I want to bring up the update on splitting and moving a partition set, or basically subpartitioning tables and then being able to move all of the data associated with that in a bulk data move to a new location. Ron, can you explain how this process works?

    Ron: This gives us a lot of flexibility for data management based on future requirements, size of the data, key changes, or key management requirements.

    So generally, when we use composite sharding, remember, this is a combination of user-defined partitioning plus system partitioning put together. That gives you a little bit more control over how the shards are organized and over distributing the data evenly across the shards.

    So sometimes, based on this type of configuration, we may actually need to split a partition, and that can cause shard key values to be assigned to a new shardspace based on the partitioning reconfiguration. This data movement needs to be automatically managed. So when you go ahead and split a partition or partitionset, the data, based on your configuration and your identification, automatically moves to the new location between those shardspaces.

    17:32

    Lois: What are some of the key advantages of this for clients?

    Ron: This provides a huge benefit to clients because it gives them the flexibility of better managing their configuration, expanding both the configuration and the servers, and the structures, for better management of the data and the load. The data is completely online during all of this data movement. Since this is being done behind the scenes by the database, it does not impact the availability of the data for anyone who is actually using it.

    And then, the data is generally moved using transportable tablespaces, in big bulk, in big chunks. So it's almost like copying portions of the files. If you remember, in Oracle Database we could take a backup of big files as image copies in pieces. This is kind of similar, where chunks of data can be moved and then transported if possible, depending on the organization of the data itself for those particular partitions.

    18:48

    Lois: So, what does it look like in practice?

    Ron: Well, clients can now go ahead and rearrange their data structure based on adjustments to the partitioning that already exists within the sharded database. The bulk data move then triggers automatically once the customer executes the statement to restructure the partitioning. And all of the clients are still accessing the data; all of the data operations are completely maintained behind the scenes.

    19:28

    Nikita: Thank you for joining us today, Ron. If you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham…

    Lois: And Lois Houston signing off!

    19:51
    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.


  • In this two-part episode, hosts Lois Houston and Nikita Abraham are joined by Ron Soltani, a Senior Principal Database & Security Instructor, to discuss the ins and outs of database sharding. In Part 1, they delve into the fundamentals of database sharding, including what it is and how it works, specifically looking at Oracle Database Sharding and its benefits. They also explore the architecture of a sharded database, examining components such as shards, shard catalogs, and shard directors. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! The last two weeks of the podcast have been dedicated to all things database security. We discussed why it’s so important and looked at all the new features related to database security that have been released in Oracle Database 23ai, previously known as 23c.

    00:55

    Nikita: Today’s episode is also going to be the first of two parts, and we’re going to explore database sharding with Ron Soltani. Ron is a Senior Principal Database & Security Instructor with Oracle University. We’ll ask Ron about what database sharding is and then talk specifically about Oracle Database Sharding. We’ll look at the benefits of it and also discuss the architecture.
    Lois: All this will help us to prepare for next week’s episode when we dive into each 23ai new feature related to Oracle Database Sharding. So, let’s get to it. Hi Ron! What’s database sharding?

    01:32

    Ron: This is basically an architecture that allows you to divide data for better computing and scaling across multiple environments, instead of having a single system performing the work. So this allows you to do hyperscale computing, along with other technologies that are included, which let you distribute your queries and all other requests across these multiple components to get a very fast response.
    Now, many times, distributing these segments across databases, where each database is called a shard, allows you to have a geographical location component while you are not really sharing any of the servers or components. So it allows you separation and data management for each of the shards separately. However, when it comes to the application, the sharded database is totally invisible. As far as the application is concerned, it connects to a global service and submits its statements. Everything else is then managed by the sharded database underneath.

    With sharded tables, the data basically gets distributed across the shards. Normally, this is done through horizontal partitioning. And then the data, depending on the partitioning scheme, is distributed across, say, server A, server B, and server C, which are independent servers running independent databases.
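    For illustration, a minimal sketch of a system-managed sharded table using consistent hash partitioning; the table, column, and tablespace set names are placeholders, and the shard catalog and tablespace set are assumed to already exist.

        CREATE SHARDED TABLE customers (
          cust_id   NUMBER        NOT NULL,
          name      VARCHAR2(100),
          region    VARCHAR2(20),
          CONSTRAINT pk_customers PRIMARY KEY (cust_id)
        )
        PARTITION BY CONSISTENT HASH (cust_id)
        PARTITIONS AUTO
        TABLESPACE SET ts_set_1;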

    03:18

    Nikita: And what about Oracle Database Sharding specifically?

    Ron: Oracle Database Sharding allows you to automate how the data is distributed and replicated, and to maintain a kind of directory that defines the complete sharding scheme, while everything is distributed across many servers with no sharing, whether of hardware or software. It allows you very good scaling, so you can scale based on this partitioning across all of these independent servers.

    And based on the subsets and the discrete data configuration, you can go ahead and distribute this data across these components, where each shard is an independent data location or data component, a subset of the data that can be used either individually on its own or globally across all of the shards together. And as we said, to the application, Oracle Database Sharding also looks like a single component.

    04:35

    Lois: Ron, what are some of the benefits of Oracle Database Sharding?

    Ron: With Oracle Database, you basically have linear scaling capability across as many shards as you like. And all of the different database configurations are supported with this. So you can have RAC databases across the shards, Oracle Data Guard, GoldenGate. All of the different components are still used to give you the high availability and every other kind of functionality that we are generally used to having with a single database.

    It provides you with fault tolerance. Each component could be down. It could have its own replicated data. It doesn't affect the other locations or the availability of the data in those other locations.

    And finally, depending on data sovereignty and configuration, you could actually distribute data geographically across different locations based on requirements, and also on data access, to provide higher speed for local data management.

    05:46

    Lois: I’d like to understand more about the architecture of Oracle Database Sharding. Ron, can you first give us a broad overview of how Oracle Database Sharding is structured?

    Ron: When it comes to the Oracle Database Sharding architecture, the components include, first, your shards. Each shard is an independent Oracle Database. It all depends on the partitioning: you decide on a partition key and then on how the actual data is divided across those shards.

    06:18

    Nikita: So, these shards are like separate pieces of the database puzzle…Ok. What’s next in the architecture?

    Ron: Then you have the shard catalog. The shard catalog is a catalog of your sharding configuration. It is aware of all of the components in the sharded database, and for any kind of replicated object, the master object exists in the shard catalog to be maintained from there.
    It also manages global queries, acting as a proxy. So queries can be distributed across multiple shards. The data from the shards is returned to the catalog to be grouped together and then sent back to the client.

    Now, this shard catalog is basically another Oracle Database that is created independently of the shards that contain the actual data, and its job is to maintain this catalog functionality.

    07:19

    Nikita: Got it. And what about the shard director?
    Ron: The shard director is like another form of a global service manager.

    So it understands the sharding by being able to access the catalog, and it knows where everything exists. The client connection pool will hit the shard director. In general, the communication is either distributed to the shard catalog to be proxied or, if the key is available, the director can send the query directly to the shard where the data exists, based on the key. The shard can then respond to the client directly. So all of the connection pooling and the components for global administration are generally managed by the shard director.

    08:11

    Nikita: Can we dive into each of these components in a little more detail? Let’s go backwards and start with the shard director.

    Ron: The shard director, as we said, is like a global service manager. It acts as a regional listener where all of the connection requests come to the shard director and are then distributed from there, depending on the type of connection being used.

    Now, the director understands the topology and maintains a complete understanding of the mapping of the data to the shards. And if the request specifies the shard key, it can route the connection request directly to the appropriate shard where the data resides, for a direct response.

    09:03

    Lois: And what can you tell us about the shard catalog?

    Ron: The shard catalog is another Oracle Database that is created for the special purpose of holding the topology of the sharded database. It has all of the centralized metadata about your sharded database. It also acts as a proxy.
    So, if a client request comes in without providing a shard key, the request goes to the catalog. It can be distributed to all of the shards, so the shards that actually have the data can respond, and the data can then be combined and sent back to the client. The catalog also holds the master copy of all the duplicated tables that are created in the sharded database.

    09:56

    Lois: Ok. I’ve got it. Now, let’s talk more about the shards themselves.

    Ron: Each shard is basically a database. And data is horizontally partitioned to be placed on each of these shards. So, this physical database is called the shard. And depending on the topology of your sharding, there could be user-defined sharding, for example, where multiple key values are in a single shard, or there could be system sharding, where, based on the hash value, data is distributed, whether as single or multiple data components, across each shard.
    Now, this is completely transparent to the application. As far as the application is concerned, this is a single database, and everything it does generally just operates as a single database interaction.

    However, when it comes to the administrators, each shard is a separate database. Each shard can be managed independently and can have its own standby and other components that are then set up for high availability and management of the data operations.

    11:21

    Do you have an idea for a new course or learning opportunity? We’d love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator.

    Your suggestion could find a place in future development projects! Visit mylearn.oracle.com to get started.

    11:41

    Nikita: Welcome back! Let’s move on to global services and the various sharding methods. Ron, can you explain what global services are and how they function in a sharded database?
    Ron: A global service is generally the service that the application uses to connect to the sharded database. This is provided and supported through the shard director. So clients are routed using this global service.

    12:11

    Lois: What are the different sharding methods that are available?

    Ron: When it comes to the sharding methods that are available, originally we started with system sharding, which is hash partitioning: basically, data is distributed evenly across the shards. Then we needed to allow for user-defined sharding, because sometimes it's not just about distributing the data evenly, it's also about controlling where the data goes, to be able to control individual query execution based on the keys, and even for data sovereignty and the placement of the data itself.

    And then there is composite sharding, which provides a combination of user-defined sharding and system hash sharding, giving you a bit of both to better distribute your data across the shards.

    And finally, sub-partitioning: all types of sub-partitioning are supported, to provide a better structure for the data depending on the application schema design.
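    As a rough sketch of user-defined sharding in DDL: the table, values, and tablespace names are placeholders, each tablespace is assumed to live in the intended shardspace, and the exact clauses should be checked against the documentation.

        CREATE SHARDED TABLE accounts (
          account_id   NUMBER      NOT NULL,
          country_code VARCHAR2(2) NOT NULL,
          balance      NUMBER,
          CONSTRAINT pk_accounts PRIMARY KEY (account_id, country_code)
        )
        PARTITION BY LIST (country_code)
        ( PARTITION p_americas VALUES ('US', 'CA') TABLESPACE ts_americas,
          PARTITION p_europe   VALUES ('DE', 'FR') TABLESPACE ts_europe );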

    13:16

    Nikita: Ron, how do clients typically connect to a sharded database?

    Ron: When it comes to client connections, all the client connections are generally routed to the director and then managed from there. There are multiple ways that clients can connect. One is a direct connect. With a direct connect, they provide the shard key in the request. Therefore the director, knowing the topology, can route the client directly to the shard that has the data.

    Proxy routing is done by the catalog. This is generally when a shard key is not provided or data is requested from many shards. So the request is sent to the catalog. The catalog database will then distribute the query to the shards, collect the results, and combine them, sending the result back to the client, acting as a proxy sitting in the middle.

    And middle-tier routing is when you expose the structure of your sharding to the middle tier. So when the middle tier sends the request, the request identifies which shard the data is going to, and you take advantage of that from the middle tier so the data is routed properly. But that requires exposing the structures and everything to the middle tier.

    14:40

    Lois: Let’s dive a bit deeper into direct routing. What are the advantages of using this method?

    Ron: With client request routing, as we talked about with direct routing, this allows the applications to get very quick data access when they know the key that is used for the distribution of the data, and that key is used to access the data from the shard.
    This provides you a direct connection to a shard from the shard director. And once the connection is established, the queries can get data directly from the shard with the key that is supplied. So the shard responds to the data request for that particular subset of data.

    Now, with direct routing, again, you get some advantages. The advantage is that you have much better performance for retrieving a subset of the data, because you don't have to wait for every shard to respond to a particular query.

    If you want to distribute data geographically or based on a specific key, of course, all of that is perfectly supported. And it kind of allows you to distribute your query to the actual location where the data exists.

    So for example, data that is in Canada can then be locally accessed in Canada through this direct access. And of course, it helps when it comes to management of your client connections and load balancing of those connections, and it supports all types of queries and application requests.

    16:18

    Nikita: And what about routing by proxy?

    Ron: Proxy routing is when queries do not supply the actual sharding key, which identifies where the data resides, or when the routing cannot be properly identified. Then the shard director will send the request to the catalog, which performs the work as the proxy.

    So the proxy will then send a request to all of the shards. If any shards can be eliminated, they will be. But generally, all of the shards that could have any portion of the data will get the request. The results are then sent back to the proxy, and the proxy then coordinates the data going back and forth with the client.

    So the shard director basically hands this type of data access to the catalog to act as the proxy. The shard director is then no longer part of the connection management, since everything is handled by the shard catalog itself.

    17:37

    Lois: Can you explain middle tier routing, Ron?

    Ron: This generally allows you to use the middle tier to define which shard your data is being routed to. This is a type of routing that can be used where the data geographically has some sovereignty requirements or the application is aware of the structure.
    So the middle tier is exposed to the sharded database topology, so it understands exactly what these components are. Based on the shard key in the specific request, the middle tier can then route the application to the appropriate location for the connection.

    And then either one shard or a subset of shards will maintain those connections for the data access going back and forth, since the topology is now being managed by the middle tier. Of course, all of the work that is done here is still known to the catalog and registered in the catalog. So the catalog is fully aware of any operations that are going on, whether the connection is made through the middle tier or through direct routing.

    18:54

    Nikita: Ron, can you tell us how query execution and DDL operations work in a sharded database?

    Ron: When it comes to the query execution of the application, there are no changes, no requirement to identify specifically how the data is distributed. All of that is maintained behind the scenes based on your sharding topology.

    For the DDL, most of your tables, most of the structures, work exactly the same way as they did before. There are some general structures associated with the sharded database that we originally create and set up with mappings. Once the mappings are configured, the rest of the components are created just like in a regular database.

    19:43

    Lois: Ok. What about the deployment process? Is it complicated to set up a sharded database?

    Ron: The deployment of the sharded database is fully automated using Terraform, Kubernetes, and scripts that are put together. Basically, what you do is provide some of your configuration information and the structure of your topology through an input file, like a parameter file.

    And then you execute the scripts, and they will build everything else based on the structure that you have provided.

    20:19

    Nikita: What if someone wants to migrate from a non-sharded database to a sharded database? Is there support for that?

    Ron: If you are going to migrate from a regular database to a sharded database, there are two components that are fully shard aware.

    First, you have the Shard Advisor. This can look at your current structure, the schema, how the data is distributed, the workload, and how the data is used, to give you a recommendation on what type of sharding would work best for that workload.

    And then Data Pump is fully aware of the sharding component. Normally, we use Data Pump and load into each of the databases individually. So instead of one job having to read all the data and move it across many shards, data can be loaded individually into each shard using Data Pump, for much faster operations.

    21:18

    Lois: Ron, thank you for joining us today. Now that we’ve had a good understanding of Oracle Database Sharding, we’ll talk about the new 23ai features related to this topic next week.

    Nikita: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Until next week, this is Nikita Abraham…

    Lois: And Lois Houston signing off!

    21:45

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In this episode, hosts Lois Houston and Nikita Abraham continue their exploration of Oracle Database 23ai's database security capabilities. They are joined once again by Ron Soltani, a Senior Principal Database & Security Instructor, who delves into the intricacies of the new hybrid read-only mode for pluggable databases, the flexibility of read-only users and sessions, and the newly introduced developer role. They also discuss simplified schema-level privileges and the integration of Azure Active Directory with Oracle Database. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone! In our last episode, we discussed database security, why it is so important, and all its different components. Today, we’re going to be continuing that conversation by looking at all the new features related to database security that have been released in Oracle Database 23ai, previously known as 23c.

    00:59

    Lois: And we’re so happy to have Ron Soltani back as our guide. Ron is a Senior Principal Database & Security Instructor with Oracle University. Hi Ron! Thanks for joining us again! We have a list of the new features related to database security and we’d like to ask you about them one by one, starting with the new mode for pluggable databases. What’s that about?

    01:21

    Ron: With the hybrid read-only mode for a pluggable database, the database can be in read/write mode or read-only mode, depending on the user that is actually connected. So one of the things we have to realize is that the regular read-only mode has one major issue. The major issue is that everything, including the data dictionary, including SYSAUX and all of the other elements, is also locked up read-only.

    So we cannot do any database maintenance. We cannot collect statistics to monitor anything. So you pretty much have to hard-tune everything for the load you want and maintain everything. And this happens in many warehouse environments, in environments where the data itself is generally loaded and then just heavily read, so it needs to be in a read-only mode to protect it.

    So with the hybrid read-only mode, if you are a local user in the PDB, even a PDB administrator-- so I can create a local user in the PDB as a PDB administrator and even grant that PDB administrator the SYSDBA privilege-- once the PDB is open in hybrid read-only mode, even for that user, the PDB is read-only. However, a common user is, as you know, a CDB user, generally granted CDB-level privileges and considered a CDB administrator.

    If they connect to the PDB, then the PDB is actually in read/write mode. So now they can take snapshots. They can use all of the database tools to monitor how things are going. They can perform maintenance. So this allows us to perform patching, maintenance, and other database-related operations.
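    A minimal sketch of how this mode might be used; the PDB name is a placeholder, and the exact OPEN clause is an assumption to verify against the 23ai documentation.

        -- from the CDB root: close the PDB, then open it in hybrid read-only mode (OPEN clause assumed)
        ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE sales_pdb OPEN HYBRID READ ONLY;

        -- local users now see the PDB read-only, while common users still get read/write access
        SELECT name, open_mode FROM v$pdbs WHERE name = 'SALES_PDB';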

    03:17

    Nikita: So you don’t have to flip back and forth between read-only, read/write, read-only, read/write…

    Ron: Because, you know, if we have the database in read/write mode and want to go to read-only, generally we would have to shut down the database and then open it read-only. Then from read-only, we can go to read/write. But to go back to read-only again, we have to shut down again.

    Lois: Which was the issue with the normal read-only on the pluggable database, right? I’m glad that’s been made easier. Ok… Moving on to the next new feature, which is read-only users and sessions. What can you tell us about this one, Ron?

    03:51

    Ron: As we previously discussed, you can put the PDB in the hybrid read-only mode. But then the PDB is read-only for all local application users. However, let's say we have an environment where you have multiple application users.
    One needs to be able to perform maintenance and updates, while other sessions are just reading the data. To protect against security issues, and for better performance and operational management, we are going to set those up as read-only.

    So setting up read-only at the pluggable database level can be too coarse, depending on the application's needs. With read-only users and sessions, this gives you the capability of setting read-only either for a particular user or for a particular session.

    So when that user connects, all the user can do is read-only processing. We do a lot of testing, for example, and we have users that may have read/write privileges in the test environment, and then we want to go ahead and perform other operations.

    So we would have to take privileges away, set things to read-only, then go back and change them again to read/write. Performing all of those different types of tests, and even development, has always been an issue.

    So having the granular capability of managing this at a user or session level gives us a major benefit: better, more granular management of all application needs without sacrificing security and without extra steps that would have to be performed by administrators.
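    As a sketch of the kind of control Ron is describing; the exact statements for read-only users and sessions are assumptions here, so confirm the syntax in the 23ai SQL reference before relying on it. The user name is a placeholder.

        -- make an existing application user read-only, then read/write again (syntax assumed)
        ALTER USER app_reporting READ ONLY;
        ALTER USER app_reporting READ WRITE;

        -- restrict just the current session (syntax assumed)
        ALTER SESSION SET READ_ONLY = TRUE;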

    05:33

    Nikita: Yeah, this gives you a lot of flexibility and you don’t have to keep temporarily changing privileges or configuring specific types of sessions. It’s also an easy way to control user behavior, right?

    Ron: An application, as we said, has a schema owner, and today we want to have a schema-only account for the schema owner, one that usually nobody connects as.

    But then we have multiple schema users: one may be used for performing updates, one is used for administration, and one can be used for read-only access. So this can give me a mechanism to manage that, or, if a particular operation needs to run and, for security purposes, that particular session needs to be set to read-only, that gives us major control over it.

    And in the cloud environment, this can be a very, very good component for better managing all of the security levels, where you can enable very fine-grained control while supporting all the functionality of the application.

    06:39

    Lois: Ok. So, can you tell us about this new developer role in the database?

    Ron: If we think about application administration, usually we create a schema owner and start by giving that schema owner privileges, typically by granting the RESOURCE role. With the RESOURCE role, they can create simple objects. But when you design an application, you need to implement it, test it, and then deploy it.

    Today, there are many complex objects that can be used at the application level to manage the application. So we grant the RESOURCE role to the schema owner, then we wait until they complain that they don't have the privilege for a certain object they want to create, and then we grant them privileges as needed. That used to be the way security worked.

    But today we have schema-only accounts: the account is only enabled when we want to do schema work, and otherwise it is locked so the schema is protected. Giving that schema owner the database application developer role, which has all the needed privileges in it, should not cause any security issue when managed properly. It provides all of the privileges they need to perform their work, including creating the many complex schema structures like analytic views, hierarchies, dimensions, and data-specific types.

    And many of these types of privileges are not assigned through a regular privilege assignment; some of them are assigned through procedures.

    08:21

    Lois: And could you give us some examples of how this feature could be used?

    Ron: There are many different ways of granting all of these granular privileges. At the time we develop the schema, we don't really know which privileges we will need.
    And as we said, there are many packages we may use to create complex objects, and we would gradually have to get privileges to execute those packages in order to use them. At the time we actually build the application, we may not even know we are going to use some of these objects; only later does it become evident that one of them may be a better structure to represent what we want.

    Having to continuously deal with these types of changes can become extremely cumbersome and tedious, and it delays operations, especially now that the application schema owner can be secured. So we can grant this developer role to the schema owner and very quickly give them all the privileges they need, so they can manage their schema and all the complex objects for that schema.

    The role is called DB_DEVELOPER_ROLE. Just like any other role, you connect as an administrator and grant it to the schema owner. We no longer need to grant the RESOURCE role and everything else, because everything is included in the db developer role.
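
    For example, the pattern Ron describes combines a schema-only account with the new role (the schema name here is illustrative):

        CREATE USER hr_app NO AUTHENTICATION;   -- schema-only account; nobody logs in as it directly
        GRANT DB_DEVELOPER_ROLE TO hr_app;      -- replaces RESOURCE plus the follow-up ad hoc grants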

    10:01

    The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your
    successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community. Visit www.mylearn.oracle.com to get started.

    10:28

    Nikita: Welcome back! Ron, how have schema-level privileges been simplified in 23ai?

    Ron: To be able to understand this, first we can review the privilege assignment in Oracle Database. First, you can be granted a privilege at an object level, so you can perform certain work on a particular object.

    However, let's say I have an app user account that needs to read from multiple objects within a particular schema. Granting at the object level is too low, because I have to go to each object and assign the privileges needed on that particular object to the user.

    Or we had system privileges, for example granting CREATE ANY TABLE to a user. The problem with that is, yes, you can now create tables in the schema I want you to work with, but that privilege goes across all the schemas in the database. Not the Oracle-maintained schemas, of course, those are protected, but across all user schemas.

    11:34

    Lois: Right. So, you're getting that privilege on other schemas that you may not really need that privilege for...

    Ron: So now that gap is filled by the schema-level privilege, which allows you to grant the same "any" privilege, but on all objects of a particular schema rather than across all the schemas.

    This allows us to manage schemas much better. We can have schema user accounts with different levels of privilege on all the objects they need for their work, without having to assign each of those privileges individually. We used to create many different roles with the needed privileges and then try to control users by granting them those roles. All of that is much simpler now with schema-level privileges.
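
    A short example of such a schema-level grant (schema and user names are made up):

        GRANT SELECT ANY TABLE ON SCHEMA hr TO app_reader;
        GRANT INSERT ANY TABLE ON SCHEMA hr TO app_writer;
        GRANT UPDATE ANY TABLE ON SCHEMA hr TO app_writer;
        -- The privilege covers current and future objects in HR only, not other schemas.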

    12:34

    Nikita: Ron, I want to ask you about the new feature on creating audit policies at the column level.

    Ron: If you remember, in the past we could create audit policies with the old system, where you would identify what to audit. But then you had to manage a whole bunch of parameters and security settings, and protecting the audit data, even from administrators, was a major issue.

    In Database 12c, Oracle added unified audit, which gives you protection on the audit schema; even administrators cannot access it. You manage it through privileges that are assigned specifically to the users who are going to manage the audit.
    It also allows you to audit Oracle tools like Data Pump and RMAN. So you can create a really secure audit environment, monitor everything in the database using unified audit, and then maintain and manage those audits.

    One of the important aspects of auditing is generating a minimal amount of audit data, so that the audits can actually be reviewed. If you generate too much audit data, it is very hard to review, whether with an automated system or with users reviewing the audits manually.

    Furthermore, if we wanted to audit specific columns for different operations like SELECT or DML, we would have had to use row-level security and build additional policies to individually monitor those columns, which is not simple to use and manage. And those audits are put in different tables, so maintaining all of them and relating them to each other has always caused major issues.

    So the benefit of having column-level auditing added to the normal unified audit policies is that you can build your audits not at the table level but only for a particular column. This reduces the false positive results that get generated, because if I audit UPDATE on a table, an update to any column generates an audit record.

    But if I audit UPDATE on the salary column, the audit is generated only if salary is updated. That gives me just the audits that are needed, without the additional false positives that would otherwise be generated.
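
    A hedged sketch of such a column-level policy (object names are made up; check the exact clause against the 23ai unified auditing documentation):

        CREATE AUDIT POLICY emp_salary_pol
          ACTIONS UPDATE(salary), SELECT(salary) ON hr.employees;
        AUDIT POLICY emp_salary_pol;
        -- Audit records are generated only when the SALARY column itself is read or updated.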

    15:08

    Lois: Ron, can you talk to us about the management of authorization for Unified Audit administration, especially when using Database Vault?

    Ron: So first as we know for the Unified Audit, you have audit admin privilege and audit viewer privilege.

    If you want to be able to create and administer and manage all of the audit information, including the audit purging and time periods and all of that, you have to have audit admin privilege. If you want to be able to read and generate the reports or things like that from the audits that have been created, you have to have audit viewer privilege.
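
    In practice these are granted as roles, for example (grantee names are illustrative):

        GRANT AUDIT_ADMIN  TO sec_admin;     -- create, enable, and purge unified audit policies
        GRANT AUDIT_VIEWER TO sec_auditor;   -- read the unified audit trail and build reports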

    Now we also have Oracle Database Vault. Database Vault uses a kind of row-level security, but not on the end-user data; it applies that security and administration to the Oracle data dictionary. It allows you to control when particular objects can be used and at what level they can be used.

    It gives you complete control over how the database and its objects are used and made available to other users in the database, including other administrators and even schema owners. But in the past, when Database Vault was applied and enabled, Unified Audit was still managed outside it, which was very odd: one of the major security functions sat outside the main security administration utility of the database.

    So now, the Unified Audit has been incorporated into Database Vault. You can use Database Vault to set up the privileges and configuration for the authorizations required for managing Unified Audit. This also controls all the high-level users, including SYS, SYSTEM, and anyone who may have the DBA role or other high-level privileges.
    So this allows us to enable Database Vault and then manage the authorizations for the Unified Audit through Database Vault. Therefore, all authorization administration is unified under the same security tool, which is Database Vault.

    17:28

    Nikita: The final new feature to discuss is the integration of Microsoft Azure Active Directory with the Oracle database environment. What can you tell us about it, Ron?

    Ron: This has been requested by many clients who use other platforms and directory services and need to access Oracle databases, whether in OCI, Oracle Cloud, where the databases are running, or even in a local environment.
    Oracle wanted to allow this to happen. If you remember, originally we had the capability of mapping database users into an Oracle directory, so the user's roles and privileges could be centrally managed and the user did not inherit any privileges in the database. If the user connects directly to the database, they have no privileges; connect properly through the directory, and everything is enabled.

    Then in Database 18c, Oracle introduced centrally managed users, or CMU, where we could map a third-party Active Directory and use it for authentication and user administration when connecting to the Oracle database. However, many of our clients use Microsoft Azure Active Directory, and they wanted to integrate that particular directory into the Oracle environment, especially in the Oracle OCI database service environment.

    To make that possible, Oracle has built multiple components that allow this integration to be configured and used, so clients can use Azure Active Directory for their user administration centrally.
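
    As a very rough sketch, the integration is driven by database parameters; the parameter names below are assumptions recalled from Oracle's Azure AD integration documentation, and the configuration value is only a placeholder, so verify both before use:

        ALTER SYSTEM SET IDENTITY_PROVIDER_TYPE = AZURE_AD SCOPE = BOTH;
        ALTER SYSTEM SET IDENTITY_PROVIDER_CONFIG = '...tenant and application details...' SCOPE = BOTH;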

    19:20

    Lois: With that, I think we’ve covered all the new features related to database security in 23ai. Thanks so much for taking us through all of them and giving us some context.

    Nikita: Yeah, it’s really been so helpful. To learn more about these new features and watch some demonstrations on them, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham…

    Lois: And Lois Houston signing off!

    19:54

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Join hosts Lois Houston and Nikita Abraham, along with Senior Principal Database & Security Instructor Ron Soltani, as they dive into the critical topic of database security. In the first of a two-part series on database security in Oracle Database 23ai, they discuss the importance of protecting data against external and internal threats, common security risks like phishing and SQL injection, and the principle of least privilege. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and joining me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! In case you missed last week’s episode, we’ve begun a new season of the podcast, talking about all the new features in Oracle Database 23ai. We covered blockchain tables and new features, and today’s episode is going to be one of two that will be dedicated to database security.

    Nikita: Right, Lois. So, in Part 1, we want to set the scene, so to speak, by looking at an overview of database security so that when we discuss some of the new features, we’ll know exactly where they actually fit into the process. Joining us for these two episodes is Ron Soltani. Ron is a Senior Principal Database & Security Instructor with Oracle University.

    01:16

    Lois: Hi Ron! Thanks for being with us today. To start off, let's discuss the importance of database security. Why is database security so critical today?

    Ron: Security requirements describe the need for keeping things private and making sure we protect against threats and against data destruction. We also have, today, data that is global; there is consolidation of data, and there is globalization.

    There are data sourcing and location considerations, where the data is actually located, rules imposed by different governments, and guidelines that enforce a certain type of security administration on the data. And finally, there are many companies and organizations that come up with guidelines or rules that must be followed, security requirements that we must set up and build compliance for.

    02:24

    Nikita: Ron, what are some of the common security risks that databases face?

    Ron: Security risks can include external threats. These could be unauthorized users trying to use phishing to get privileged user information and then get in as a privileged user to do whatever damage they want. There is the denial-of-service attack, one of the most common attacks out there, where attackers target a component, like the listener in a database, and cause a situation where the listener can no longer establish connections to the database. Now no client can connect to the database to get data; that is a denial-of-service attack.

    Then there is unauthorized access to the data. Again, this is generally done through phishing or sometimes even SQL injection. SQL injection allows an attacker to insert a SQL statement into the application where it's not expected, where it can then be executed in the database and return data that should not be exposed to that user.

    03:42

    Nikita: Sorry, can you explain that?

    Ron: For example, when you go to Google, you want to run a search. They expect you to type something like the meaning of a particular word.
    Now, what if I knew the structure of the data organization behind it? Instead of just putting in the meaning of whatever word, I actually plug in a SQL statement that then gets passed along to the system to be executed. And if the referenced objects exist and the statement runs within the privileges of whatever is executing it, that SQL could expose some information to me. So that's the idea behind being able to perform that type of operation.

    04:24

    Lois: Ok. So, those are external threats. But, could you also have internal threats?

    Ron: Internal threats could be abuse by someone who is privileged, or sabotage of the system and the data. It could be data complexity that creates an environment where data is not properly secured, or even accidental damage. That is a security issue too.

    And then, if there is damage, we need to be able to perform recovery, so we create backups and copies of the data, and that recovery information must be properly secured as well. There is also omission, being able to block access or cause issues with the data. And then there are external threats coming in through internal abuse; internal abuse can actually open the door to allow external threats to get in.

    Now, the final type of security risk could be coming in from partners who have privilege to be able to load or access and get data. For example, I may sell a particular product. But the product description is actually coming from the product distributor.

    05:47

    Nikita: Yeah, so they have access to push that product information into your system. So, what are the typical points of attack for a database? I’m familiar with phishing.

    Ron: With phishing, people send you emails or probe the system to gather information from the pieces and responses that come back. This is one of the reasons for many of the things we do; for example, we return deliberately vague error messages. In Oracle Database, if you don't have privileges on a table and you try to select from it, we tell you that a table or view does not exist. That way, you don't know if it's a table, you don't know if it's a view, and as far as you know, it doesn't exist at all; the name you have does not correspond to any particular data.

    06:32

    Nikita: That’s clever!

    Ron: If we told you that you don't have the privilege, you would now know that a table with this name exists, and you would just have to find a way of hacking into it. So that is basically what phishing means: extracting different pieces of information through different channels and being able to put them together.
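
    For example, a user without privileges on a table gets the same error as for a table that truly does not exist (the table name is made up):

        SQL> SELECT * FROM hr.payroll;
        ORA-00942: table or view does not exist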

    Then in the database, we have some well-known privileged accounts that, if not protected, can be a vulnerable access point. There are back doors into the database, for example somebody getting into the operating system DBA group and then connecting to the database without a user ID and password.

    That's why we have to protect every layer. There may be debug code available that reveals how the system actually operates, cross-site scripting between different data and operations, and, as we talked about, SQL injection.

    07:28

    Lois: Can you dive a little deeper into SQL injection, Ron?

    Ron: With SQL injection, you have to understand that, in general, it means somebody, like we said, knows the structure of the way the application is operating and is able to inject a SQL statement where the application would normally expect a condition, a parameter, or some other input.
    That SQL then becomes part of the statement submitted to the database. Now, we need to understand that SQL injection is not really about the person, and generally not about the overall configuration of the database.

    The most important aspect of SQL injection is the session that is actually doing the work. For example, say I am a DBA and I am going to collect statistics for a table. If I connect as SYSDBA to collect those statistics and somebody hacks into my session and injects a SQL statement to drop the database, the database is gone, because the session has the SYSDBA privilege.

    But suppose I have a user that only has the CREATE SESSION privilege and can execute a script. In that script, I write the statement to collect statistics, and that script is granted only the privilege to collect stats. Now I can connect as that user, with that minimal privilege, and just execute the script.

    Now, if anyone injects SQL into that session, it can do no damage, because the session has no other privileges. That is the important thing to understand about SQL injection: what matters is what happens at the session level.

    And many of the security elements we will see, like read-only sessions and the hybrid read-only PDB, are related to this type of SQL injection or abuse.
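
    A minimal sketch of the least-privilege pattern Ron describes (all names are made up, and the MAINT schema is assumed to already have the privilege to gather statistics on HR): the maintenance logic lives in a definer-rights procedure, and the connecting account has nothing but CREATE SESSION and the right to execute it.

        CREATE USER stats_runner IDENTIFIED BY "ChangeMe_2024#";
        GRANT CREATE SESSION TO stats_runner;

        CREATE OR REPLACE PROCEDURE maint.gather_emp_stats AS
        BEGIN
          -- Runs with the privileges of the MAINT schema, not of the caller
          DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
        END;
        /
        GRANT EXECUTE ON maint.gather_emp_stats TO stats_runner;
        -- SQL injected into a STATS_RUNNER session has no privileges of its own to exploit.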

    09:30

    Lois: Yeah, we are looking forward to talking through those new features in the next episode.

    Ron: Right. These are the common vulnerabilities that can be exploited, along with any application users whose input can be worked into a string and supplied into the middle of statements, things like that.

    09:56

    Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security, artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit www.mylearn.oracle.com to get started.

    10:25

    Nikita: Welcome back! One of the concepts I wanted to ask you about, Ron, is the principle of least privilege. Can you explain what it really means?

    Ron: The principle of least privilege, again, means that the work that needs to be done should be done with minimal privileges.

    10:40

    Nikita: But, we’ve always thought about that, right? Giving a user minimal privileges…

    Ron: Well, back in the old days, we used to execute everything as a schema owner.

    Therefore, we had privilege on all the data. Then we said, OK, let's create schema users and only give them like a read privilege or this privilege. So they can only do the type of work they need to do, which is fine.

    But at the same time, that can be very complex; now I need a lot of different users. So when it comes to the principle of least privilege, it is generally about only installing the software that is required and only enabling or turning on the machines and network segments that are going to be used.

    Have proper operating system users and privileges configured for all of the software installed at the operating system level. Have proper administrator accounts that are well maintained. Set up a privileged user account for each operation.

    So when we do maintenance and database administration, we are not creating very high-level privileged sessions. That's why the separate administrative privileges were created in Database 12c and up, like SYSBACKUP, SYSDG, and SYSRAC. You don't inherit the SYSDBA privilege to actually do the work; you only have the privileges you need.

    And, of course, limit each user's access to the particular objects and operations they need. However, as I mentioned, this is not just about the user level; it is also about the session level. If I'm doing maintenance connected as the schema owner and somebody injects a SQL statement to drop a table, the table is gone. That's why it is very important for us to have control over how sessions can operate within the database.
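
    For example, a backup operator can be given the task-specific administrative privilege instead of SYSDBA (the user name is illustrative):

        GRANT SYSBACKUP TO backup_op;
        -- Connect for backup work only:  CONNECT backup_op AS SYSBACKUP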

    12:34

    Lois: Right, so, what about the strategy of defense in depth?

    Ron: Defense in depth means we have to strengthen and apply security at every level, whether it is applied at the operating system, in the database, in the application, or in the network.

    So we have to have policies defining all the different security levels. Most important, train users so there is no damage by mistake. Harden every component, including the operating system. Set up proper firewalls.

    Set up proper network security, like the Oracle database firewall that protects against unwanted SQL statements by comparing each SQL statement to a whitelist of acceptable statements. And then use other database security features like VPD and auditing, as we will talk about, and other components to give you an overall very secure environment.

    13:35

    Nikita: Ron, what are the fundamental aspects of managing security only within the database… not including the operating system or the application?

    Ron: First, we have to have confidentiality. Confidentiality means we need to make sure all of the data is properly secured at the data level, both at the storage level and in the database where the data is used, and we have many different ways of managing confidentiality.

    Number one, properly creating and maintaining users, with proper passwords and proper authentication. Then setting up authorization, because privileges alone may not be enough: if I give you the SELECT privilege, you can see every column and every row.

    So I may need row-level security, data redaction, data masking for duplication, and other mechanisms to help us secure even a subset of the data.

    14:35

    Lois: Ok… so that’s confidentiality. What’s next?

    Ron: Data integrity means we need to make sure data is not destroyed, whether it is at rest in the database, in memory, in data files, in backups, in exports, or during transmission over the network.

    So we usually apply encryption and check-summing not only to protect the data, but also to validate, make sure it's not corrupted.

    Next is data availability, which matters especially today, when we run 24/7 operations. Remember we talked about the denial-of-service attack on a database; that usually targets the listener, because if the listener is crashed, nobody can connect.
    We have to utilize the available tools and components: RAC, so we have multiple instances in case a particular host crashes and I lose an instance; Data Guard, in case my storage or the whole database crashes; PDB management with duplication, having PDB standbys that are maintained and managed behind the scenes; and PDB snapshots, which are point-in-time copies that preserve data I can use to restore to those particular points in time.
    Then there is backup and recovery through RMAN or other backup and recovery processes, so if data is damaged, I can restore it and recover it. And finally, auditing. Auditing has historically been known as an after-the-fact control.

    16:09

    Nikita: That’s what I was wondering… You only see what’s going on after something happens, right?

    Ron: It can also be a deterrent. When people know they are being audited, they are more careful, they don't make mistakes, and they try not to do anything they would get caught for. And today, auditing can also be set up in a way that not only catches what is going on but actually helps us better secure data and respond much more effectively.

    Now, the problem with auditing has always been the overhead. That's why unified audit, which has much less management overhead, can give us extremely detailed audits. And the new features allow us to reduce the amount of audit data even further, by auditing only at the column level, and give us better protection for those audits.
    By the way, in the old days, most auditing was done in the application, because we never knew who the end user behind the application was. But today, with Active Directory mapped into the database and information passed between the two, all audits can actually come back centrally to the database.

    17:21

    Lois: So, to wrap up today’s conversation, Ron, can you just summarize database security for us? All the things we need to think about.

    Ron: So database security starts with making sure, number one, our network is secure and we are accessing the data through a very secure connection coming in from the user.

    If required, you could have a three-tier environment where the clients go through an external firewall to get to the middle tier, and then from the middle tier go through an internal firewall to get to the database. Or, for direct access, such as administration, remote administration, and operations, set up a secure network path coming in for that.

    Then set up proper authentication and authentication management, configure detailed access control, and set up multiple levels of security for data access, not just at the table level but even at the row and column level.

    And build complete data confidentiality by adding storage encryption and managing the data even in components sitting outside the database. Of course, we have Oracle components that can manage some of those for you, for example backups through RMAN and things like that.

    To get to this complete data confidentiality, you also add in, as we said, efficient auditing that can describe any issues and tell us where the problem is and how it happened. And if we set up an audit system that is very focused, we can even tie it to triggers and notifications.

    That way issues are responded to very quickly, because the problem with audits has always been that there are just too many of them, so nobody ever reviews them to see exactly what has happened, and many vulnerabilities go undetected until major damage happens.

    And you can tell this is common out there in a lot of businesses when you hear in the news that such-and-such company was broken into and so much data was stolen. If proper security had been set up, and proper alerting and automated systems had been configured to catch a network intrusion through real-time auditing, then maybe corrective action could have stopped a lot of that damage.

    20:04

    Nikita: Thanks for that wonderful overview, Ron. In our next episode, we’re going to go through each of the new security features and try to understand how Oracle is tightening the screws around security.

    Lois: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Until next week, this is Lois Houston…

    Nikita: And Nikita Abraham signing off!

    20:31

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham kick off a new season with a deep dive into the latest features of Oracle Database 23ai. Joined by Bill Millar, a Senior Principal Database & MySQL Instructor, they explore the new enhancements to blockchain tables, such as row versions, user chains, delegate signer, and countersignature. So, if you're curious about harnessing the power of blockchain tables for your database needs, this is the perfect episode for you! Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone! Thank you for joining us as we begin a new season of the podcast. For the next few weeks, we’re going to explore all the new features in Oracle Database 23ai, previously known as 23c. These episodes will be great for you if you’re a database administrator, a developer, or even a database architect.

    Lois: Right Niki, and while anyone can listen to the podcast, you’re probably going to get the most out of this season if you have prior knowledge or experience with the previous versions of Oracle Database and have used SQL to manage Oracle Databases. Throughout this season, we’ll discuss new features in database availability, architecture, manageability, performance, and security.

    01:21

    Nikita: Exactly. Today, we're diving into the world of blockchain tables and the new features introduced. First, we'll try to get an overview of blockchain tables that were introduced in 21c. Then, we'll discuss the new features in 23ai, including row versions, user chains, delegate signer, and countersignature.

    Lois: So, let’s get started. To take us through all this, we are joined today by Bill Millar. Bill is a Senior Principal Database & MySQL Instructor with Oracle University. Hi Bill! Thanks for joining us. To begin, what is a blockchain table?

    01:59

    Bill: Well, a blockchain table provides the means for recording transactions where only insert operations are allowed. And rows are protected or restricted based on time as defined when the table is created. This makes the rows tamper-resistant with their chaining algorithms.

    02:16

    Nikita: Bill, take us through some common attributes of a blockchain table.

    Bill: They are append-only, which protects the existing data in the table. They are made tamper-resistant with a hashing algorithm. And optionally, rows can be digitally signed; signatures are mandatory, however, in blockchain platform transactions. Transaction logs, audit trails, and compliance information can benefit the most from using blockchain tables.

    02:44

    Lois: Bill, let’s talk for a minute about the blockchain tables being tamper-resistant. What makes a blockchain table tamper-proof?

    Bill: Well, with these insert-only tables, each row is chained to the previous row, except the first row, which has nothing to chain to. So once a row is added, it is chained to the previous row, which is chained to its previous row, and so on. Rows are linked when the transaction commits; we don't link them beforehand because you might roll back.

    03:13

    Nikita: Do we have some considerations or guidelines for managing blockchain tables?

    Bill: One, they may be partitioned. You can specify retention at the table level, for the blockchain table itself, using the NO DROP clause. And you can also define retention at the row level when you create that blockchain table. So you are defining a retention period for the table itself or a retention period for the rows.
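
    A hedged sketch of how those clauses fit together (table and column names are made up; check the exact clause order and values against the documentation for your release):

        CREATE BLOCKCHAIN TABLE bank_ledger (
          account_no    NUMBER,
          deposit_date  DATE,
          amount        NUMBER
        )
        NO DROP UNTIL 31 DAYS IDLE               -- table-level retention
        NO DELETE UNTIL 16 DAYS AFTER INSERT     -- row-level retention
        HASHING USING "SHA2_512" VERSION "v1";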

    03:41

    Nikita: And are there any restrictions when using blockchain tables?

    Bill: There are several restrictions for blockchain tables. Some data types are not supported, such as ROWID, LONG, TIMESTAMP WITH TIME ZONE, and so forth. And some operations are not allowed, among them updating rows, merging rows, truncating, dropping partitions, and converting a regular table to a blockchain table or vice versa.

    So you do want to make sure that you understand the restrictions if you decide that you're going to use a blockchain table. There are some things you can alter in a blockchain table. One is you can modify a retention period.

    It cannot be reduced. However, you can make it longer.

    04:30

    Lois: Ok, I think I’ve got it. So, coming to the 23ai features, what’s new with blockchain tables? Could you give us a brief overview of them before we dive into each one?

    Bill: So we have the user chain, a chain of rows based on up to three user-defined columns. Previously, the system defined the chain.

    Row versions allow me to have multiple historical versions of a row that are maintained within the blockchain table. We have the log history: the flashback data archive history tables are now blockchain tables.

    And there is also a countersignature. You can request one at the time of signing a row, and that signature metadata is stored within the row, in some hidden columns. And then you can also have a delegate signer, an alternate user who is allowed to sign rows inserted by the primary user.

    05:31

    Nikita: What are some advantages of using blockchain tables?

    Bill: There are benefits to using blockchain tables. The fraud protection is transparent: users don't notice anything as they insert the data, and you can detect tampering by verifying the rows in the blockchain table. Because that verification is separate from the data itself, validation can be more secure. And it is easier than distributed blockchains, where multiple blockchains with identical data have to be maintained across multiple different platforms.
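
    Verification is done with the supplied DBMS_BLOCKCHAIN_TABLE package; the sketch below is illustrative only, and the parameter names are recalled from memory, so treat them as assumptions and check the package reference:

        SET SERVEROUTPUT ON
        DECLARE
          l_verified NUMBER;
        BEGIN
          DBMS_BLOCKCHAIN_TABLE.VERIFY_ROWS(
            schema_name             => 'LEDGER_OWNER',   -- assumed parameter name
            table_name              => 'BANK_LEDGER',    -- assumed parameter name
            number_of_rows_verified => l_verified);      -- assumed parameter name
          DBMS_OUTPUT.PUT_LINE('Rows verified: ' || l_verified);
        END;
        /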

    06:03

    Lois: And what about benefits specifically from the 23ai new features?

    Bill: We have increased flexibility: the chains can be user-defined instead of relying only on what the system defines. Row versioning can be guaranteed. The blockchain log history records and protects the changes. And the countersignature, along with the digital signature, can help protect the data even more.

    You must specify a version; there is no default version, so you must specify either version 1 or version 2 when you create the table. Version 1 is the version from 21c. You have to specify version 2 if you are going to take advantage of the new features in 23ai.

    These two versions reduce the number of columns that you have available. Version 1 uses 20 additional hidden columns to maintain the blockchain information, whereas a version 2 blockchain table uses 40 additional columns. So version 2 reduces the number of columns you can use for your own data by 40.

    Even though version 2 uses more columns for the hidden information, it has its benefits. It allows you to add and drop columns, you can drop partitions with version 2, you have distributed transactions, and you can use it with replication, such as Oracle GoldenGate and Active Data Guard.

    07:32

    Nikita: Are there restrictions when it comes to using blockchain tables?

    Bill: Again, make sure that you understand the requirements of your tables when determining if blockchain table is going to be appropriate for your application or not.
    XMLType is not supported. You can't truncate. It doesn't work with sharded tables, and it can't be used with certain policies such as Automatic Data Optimization, Virtual Private Database, and Label Security. And you cannot use the DBMS_REDEFINITION package on a blockchain table.

    08:10

    Are you planning to become an Oracle Certified Professional this year? Whether you're a seasoned IT pro or just starting your career, getting certified can give you a significant boost. And don't worry, we've got your back! Join us at one of our cert prep live events in the Oracle University Learning Community. You'll get insider tips from seasoned experts and learn from other professionals' experiences. Plus, once you've earned your certification, you'll become part of our exclusive forum for Oracle-certified users. So, what are you waiting for? Head over to www.mylearn.oracle.com and create an account to jump-start your journey towards certification today!

    08:53

    Nikita: Welcome back! Let’s get into each of those 23ai new features, Bill. What can you tell us about the row versions feature?

    Bill: The row version option allows you to have multiple historical versions of a row corresponding to a set of user-defined columns. Previously, only the system would define the columns.
    When you create such a table, the database automatically creates a view that lets you see the row version information for that blockchain table. The view has the same columns as your table, and its name is whatever table name you created with _last$ appended to it.

    It has not only the same columns as your table but also additional columns, one of them being the last row version, which lets you see what the latest version of each row is. In order to use row versions, you must specify the WITH ROW VERSION clause when you create the table.
    It is supported with or without a primary key, but the primary key columns must not be identical to the set of row version columns. There are some restrictions: you must specify a row version name, three columns is the maximum (you can have one, two, or three), and the columns are restricted to the types NUMBER, CHAR, VARCHAR2, and RAW.

    And it cannot be used with version 1 blockchain tables, the version that came out in 21c. So if you are on 21c, you cannot create it; it is a 23ai feature. So you specify WITH ROW VERSION, then you give it the row version name, because that is required, and then up to three columns that you want to use.

    10:58

    Lois: What about user chains? How do they enhance blockchain tables?

    Bill: With user chains, again, previously only system chains were available; the system selected how to chain the rows and which columns to chain them with.

    Now a user chain can be defined by the end user, with one, two, or three columns that the chain applies to. Again, only the column types we just talked about are supported: NUMBER, CHAR, VARCHAR2, and RAW. But with user chains, being able to identify the columns yourself adds the additional flexibility to let this tamper-resistant table be used by your applications.

    The user chain is defined when you create the blockchain table; you define at creation time what the chain is going to be. Once created, any rows that have the same chain values are grouped together.

    For example, take a banking application. I have an account; I make deposits, I make withdrawals, I do balance inquiries. Because those are all based on that same field, the account, they are grouped together within the chain.

    The hashing value is applied to the columns that are stored within that chain.

    12:27

    Lois: Bill, can you explain the blockchain table delegate signer feature?

    Bill: It is, optionally, a signature that can be applied to provide additional security against tampering. However, if you do use it, it requires a digital certificate when adding a signature to a row.

    Signatures are validated using that digital certificate and the signature algorithm. The delegate is an alternate signer, and it can be used instead of or in addition to the user's own signature. So when I, as the user, create a row, it adds my signature and I can add my certificate to it, or now I can have a delegate do that for me.

    So the row can be digitally signed by the delegate instead of the user itself, and that way it is still verified. Maybe users are not able to sign the rows they created, but they trust the delegate.

    13:32

    Nikita: And the last new feature to discuss is a blockchain table countersignature.

    Bill: A countersignature provides an additional guarantee that this data has been securely stored within the table itself. You can request a countersignature at the time of signing a row. What it does is record that signature metadata in the row, and the countersignature and the signed bytes can be returned to the caller, who may want to retrieve that information to use in another store.

    As we said, we might put that countersignature and the signed bytes in another data store, or in our Oracle blockchain platform, for non-repudiation purposes. Basically, that means proof of the origin, the authenticity, and the integrity of the data. If I want to pass that information to another application or source, this says, yes, this is trusted information. So it gives that additional security: it assures the sender that their message was delivered and gives proof of the sender's identity.

    Countersignatures are saved in a table that happens to be a blockchain table itself. The countersignature is computed over the row bytes using the hashing algorithm, and it includes the end-user signature, the delegate signature, or both; remember, the end user can sign, the delegate can sign, or both can. Even though we do save that information in the blockchain table, if you are going to use this, we recommend also storing it outside the database for those non-repudiation purposes.

    15:37

    Lois: Thank you so much, Bill, for taking us though all these updates. We look forward to having you back soon to talk us through some more of these new features.

    Nikita: To learn more about blockchain tables, visit mylearn.oracle.com and search for the Oracle Database 23ai: New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham…

    Lois: And Lois Houston signing off!

    16:06

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Join hosts Lois Houston and Nikita Abraham, along with Hope Fisher, Oracle’s Product Manager for Database Technologies, as they break down the basics of databases, explore different database management systems, and delve into database development. Whether you're a newcomer or just need a refresher, this quick, informative episode is sure to offer you some valuable insights. Oracle MyLearn: https://mylearn.oracle.com/ou/course/database-essentials/133032/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00
    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26
    Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! For the last seven weeks, we’ve been exploring the world of OCI Container Engine for Kubernetes with our senior instructor Mahendra Mehra. We covered key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure. So, be sure you check out those episodes if you’re interested in Kubernetes.

    01:00

    Nikita: Today, we’re doing something a little different. We’ve had a lot of episodes on different aspects of Oracle Database, but what if you’re just getting started in this world? We wanted you to have something that you could listen to as well. And so we have Hope Fisher with us today. Hope is a Product Manager for Database Technologies at Oracle, and we’re going to ask her to take us through the basics of database, the different database management systems, and database development.

    Lois: Hi Hope! Thanks for joining us for this episode. Before we dive straight into terminologies and concepts, I want to take a step back and really get down to the basics. We sometimes use the terms data and information interchangeably, but they’re not the same, right?

    01:43

    Hope: Data is raw material, a set of facts and observations. Information is the meaning derived from the facts. The difference between data and information can be explained with an example, such as test scores. If every student in a class receives a numeric score, the scores can be averaged to determine a class average, and the class averages can be averaged to determine the school average. In this scenario, each student's test score is one piece of data, and the class's average score or the school's average score is information. There is no value in data until you actually do something with it.
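
    In SQL terms, turning the raw scores (data) into averages (information) is a simple aggregation; the table and columns here are hypothetical:

        SELECT class_id, AVG(score) AS class_average
        FROM   test_scores
        GROUP  BY class_id;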

    02:24

    Nikita: Right, so then how do we make all this data useful? Do we create a database system?

    Hope: A database system provides a simple function—treat data as a collection of information, organize it, and make the data usable by providing easy access to it and giving you a place where that data can be stored. Every organization needs to collect and maintain data to meet its requirements. Most organizations today use a database to automate their information systems. An information system can be defined as a formal system for storing and processing data.
    A database is an organized collection of data put together as a unit. The rationale of a database is to collect, store, and retrieve related data for use by database applications. A database application is a software program that interacts with the database to access and manipulate data. A database is usually managed by a Database Administrator, also known as a DBA.

    03:25

    Nikita: Hope, give us some examples of database systems.

    Hope: Popular examples of database systems include Oracle Database, MySQL, which is also owned by Oracle, Microsoft SQL server, Postgres, and others. There are relational database management systems. The acronym is DBMS. Some of the strengths of a DBMS include flexibility and scalability. Given the huge amounts of information that modern businesses need to handle, these are important factors to consider when surveying different types of databases.

    03:59

    Lois: This may seem a little bit silly, but why not just use spreadsheets, Hope? Why use databases?

    Hope: The easy answer is that spreadsheets are designed for specific problems, relatively small amounts of data and individual users. Databases are designed for lots of data, shared information use, and complex data analysis. Spreadsheets are typically used for specific problems or small amounts of data. Individual users generally use spreadsheets. In a database, cells contain records that come from external tables. Databases are designed for lots of data. They are intended to be shared and used for more complex data analysis. They need to be scalable, secure, and available to many users. This differentiation means that spreadsheets are static documents, while databases can be relational.

    04:51

    Nikita: Hope, what are some common database applications?
    Hope: Database applications are used in a wide range of use cases, which most commonly can be grouped into three areas.
    First, applications that run companies, called enterprise applications. Enterprise applications are designed to integrate the computer systems that run all phases of an enterprise's operations, to facilitate cooperation and coordination of work across the enterprise. The intent is to integrate core business processes, like sales, accounting, finance, human resources, inventory, and manufacturing.

    Applications that do something very specific, like healthcare applications-- specialized software is software that's written for a specific task rather than for a broad application area.
    And then there are also applications that are used to examine data and turn it into information, like a data warehouse, analytics, and data lake.

    05:54

    Lois: We’ve spoken about data lakes before. But since this is an episode about the basics of database, can you briefly tell us what a data lake is?

    Hope: A data lake is a place to store your structured and unstructured data as well as a method for organizing large volumes of highly diverse data from diverse sources. Data lakes are becoming increasingly important as people, especially in businesses and technology, want to perform broad data exploration and discovery. Bringing data together into a single place or most of it into a single place makes that simpler.

    06:29

    Nikita: Thanks for that, Hope. So, what kind of organizations use databases? And, who within these organizations uses databases the most?

    Hope: Almost every enterprise uses databases. Enterprises use databases for a variety of reasons and in a variety of ways. Data and databases are part of almost any process of the enterprise. Data is being collected to help solve business needs and drive value.

    Many people in an organization work with databases. These include the application developers who create applications that support and drive the business. The database administrator or DBA maintains and updates the database. And the end user uses the data as needed.

    07:19

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free. So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    07:57

    Nikita: Welcome back. Now that we’ve discussed foundational database concepts, I want to move on to database management systems. Take us through what a database management system is, Hope.

    Hope: A Database Management System, DBMS, has the following elements. The kernel code manages memory and storage for the DBMS. The repository of metadata is called a data dictionary. The query language enables applications to access the data.

    Oracle database functions include data definitions, storage, structure, and security. Additional functionality also provides for user access control, backup and recovery, integrity, and communications. There are many different database types and management systems. The most common is the relational database management system.

    08:51

    Nikita: And how do relational databases store data?

    Hope: Essentially and very simplistically, there are key elements of the relational database. Database table containing rows and columns; the data in the table, which is stored a row at a time; and the columns which contain attributes or related information. And then the different tables in a database relate to one another and share a column.
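
    A tiny example of two related tables that share a column (names are illustrative):

        CREATE TABLE departments (
          dept_id   NUMBER PRIMARY KEY,
          dept_name VARCHAR2(50)
        );

        CREATE TABLE employees (
          emp_id   NUMBER PRIMARY KEY,
          emp_name VARCHAR2(100),
          dept_id  NUMBER REFERENCES departments(dept_id)  -- the shared column relating the two tables
        );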

    09:17

    Lois: Customers usually have a mix of applications and data structures, and ideally, they should be able to implement a data management strategy that effectively uses all of their data in applications, right? How does Oracle approach this?

    Hope: Oracle's approach to this enterprise data management strategy and architecture is the converged database, which supports all the different data types and workloads.

    The converged database is a database that has native support for all modern data types and, of course, traditional relational data.

    By providing support for all of these data types, a converged database can run all sorts of workloads, from transaction processing to analytics and machine learning to blockchain to support the applications and systems.

    Oracle provides a single database engine that supports all data models, process types, and development environments. It also addresses many kinds of workloads against the same data sets. And there's no need to use dozens of specialized databases. Deploying several single-purpose databases would increase costs, complexity, and risk.

    10:25

    Nikita: In the final part of our conversation today, I want to bring up database development. Hope, how are databases developed?

    Hope: Data modeling is the first part of the database development process. Conceptual data modeling is the examination of a business and business data to determine the structure of business information and the rules that govern it. This structure forms the basis for database design. A conceptual model is relatively stable over long periods of time.
    Physical data modeling, or database building, is concerned with implementation in each technical software and hardware environment. The physical implementation is highly dependent on the current state of technology and is subject to change as available technologies rapidly change.

    A conceptual model captures the functional and informational needs of a business and is used to identify important entities and their relationships.

    A logical model includes the entities and relationships. This is also called an entity relationship model and provides the details of the relationships.

    11:34

    Lois: I think that’s a good place to wrap up our episode. To know more about the Oracle Database architecture, offerings, and so on, visit mylearn.oracle.com. Thanks for joining us today, Hope.

    Nikita: Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham…

    Lois: And Lois Houston, signing off!

    11:55

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In the season's final episode, hosts Lois Houston and Nikita Abraham interview senior OCI instructor Mahendra Mehra about the security practices that are vital for OKE clusters on OCI. Mahendra shares his expert insights on the importance of Kubernetes security, especially in today's digital landscape where the integrity of data and applications is paramount. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! In our last episode, we spoke about self-managed nodes and how you can manage Kubernetes deployments.

    Nikita: Today is the final episode of this series on OCI Container Engine for Kubernetes. We’re going to look at the security side of things and discuss how you can implement vital security practices for your OKE clusters on OCI, and safeguard your infrastructure and data.

    00:59

    Lois: That’s right, Niki! We can’t overstate the importance of Kubernetes security, especially in today's digital landscape, where the integrity of your data and applications is paramount. With us today is senior OCI instructor, Mahendra Mehra, who will take us through Kubernetes security and compliance practices. Hi Mahendra! It’s great to have you here. I want to jump right in and ask you, how can users add a service account authentication token to a kubeconfig file?

    Mahendra: When you set up the kubeconfig file for a cluster, by default, it contains an Oracle Cloud Infrastructure CLI command to generate a short-lived, cluster-scoped, user-specific authentication token.

    The authentication token generated by the CLI command is appropriate to authenticate individual users accessing the cluster using kubectl and the Kubernetes Dashboard. However, the generated authentication token is not appropriate to authenticate processes and tools accessing the cluster, such as continuous integration and continuous delivery tools.
    To ensure access to the cluster, such tools require long-lived, non-user-specific authentication tokens. One solution is to use a Kubernetes service account. Having created a service account, you bind it, through a ClusterRoleBinding, to a cluster role that has cluster administration permissions.

    You can create an authentication token for this service account, which is stored as a Kubernetes secret. You can then add the service account as a user definition in the kubeconfig file itself. Other tools can then use this service account authentication token when accessing the cluster.
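
    To make that concrete, here is a minimal sketch of the service account approach. The names (oke-admin-sa, the kube-system namespace, and the secret name) are placeholders for this example, not values from the episode:

    ```yaml
    # Service account intended for tools such as CI/CD pipelines.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: oke-admin-sa
      namespace: kube-system
    ---
    # Bind the service account to the built-in cluster-admin role.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: oke-admin-sa-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: oke-admin-sa
        namespace: kube-system
    ---
    # Ask Kubernetes to generate a long-lived token for the service account.
    apiVersion: v1
    kind: Secret
    metadata:
      name: oke-admin-sa-token
      namespace: kube-system
      annotations:
        kubernetes.io/service-account.name: oke-admin-sa
    type: kubernetes.io/service-account-token
    ```

    The token stored in that secret can then be added as a user entry in the kubeconfig file, for example with kubectl config set-credentials, so that automated tools authenticate with the long-lived token instead of the short-lived CLI-generated one.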

    02:47

    Nikita: So, as I understand it, adding a service account authentication token to a kubeconfig file enhances security and enables automated tools to interact seamlessly with your Kubernetes cluster. So, let’s talk about the permissions users need to access clusters they have created using Container Engine for Kubernetes.

    Mahendra: For most operations on Container Engine for Kubernetes clusters, IAM leverages the concept of groups. A user's permissions are determined by the IAM groups they belong to, including dynamic groups. The access rights for these groups are defined by policies.

    IAM provides granular control over various cluster operations, such as the ability to create or delete clusters; add, remove, or modify node pools; and dictate which Kubernetes object create, delete, and view operations a user can perform. All these controls are specified at the group and policy levels.

    In addition to IAM, the Kubernetes role-based access control authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC roles and ClusterRoles.

    04:03

    Nikita: What are Kubernetes RBAC roles and ClusterRoles, Mahendra?

    Mahendra: A Role here defines permissions for resources within a specific namespace, while a ClusterRole is a global object that provides access to cluster-wide objects as well as non-resource URLs, such as the API version and health endpoints on the API server.
    Kubernetes RBAC also includes RoleBindings and ClusterRoleBindings. A RoleBinding grants permissions to subjects, which can be a user, service account, or group interacting with the Kubernetes API. It specifies the allowed operations for a given subject in the cluster.

    A RoleBinding is always created in a specific namespace. When associated with a Role, it gives users the permissions specified within that Role for objects within that namespace. When associated with a ClusterRole, it grants the permissions defined in that ClusterRole, but only for namespaced objects within the RoleBinding's own namespace.

    ClusterRoleBinding, on the other hand, is a global object. It associates cluster roles with users, groups, and service accounts. But it cannot be associated with a namespaced role. ClusterRoleBinding is used to provide access to global objects, non-namespaced objects, or to namespaced objects in all namespaces.
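
    For illustration, here is a minimal sketch of a namespaced Role and a RoleBinding. The namespace, role name, and group name are assumptions for the example:

    ```yaml
    # A Role that allows read-only access to pods in the "dev" namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: dev
    rules:
      - apiGroups: [""]          # "" is the core API group
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    # A RoleBinding that grants the Role to a group, in the same namespace.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: dev             # RoleBindings are always namespaced
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role                 # could also reference a ClusterRole
      name: pod-reader
    subjects:
      - kind: Group              # subjects can be users, groups, or service accounts
        name: dev-team
        apiGroup: rbac.authorization.k8s.io
    ```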

    05:36

    Lois: Mahendra, what’s IAM’s role in this? How do IAM and Kubernetes RBAC work together?

    Mahendra: IAM provides broader permissions, while Kubernetes RBAC offers fine-grained control. Users authorized either by IAM or Kubernetes RBAC can perform Kubernetes operations.
    When a user attempts to perform any operation on a cluster, except for create role and create cluster role operations, IAM first determines whether a group or dynamic group to which the user belongs has the appropriate and sufficient permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted via a Kubernetes RBAC role or cluster role, the Kubernetes RBAC authorizer then determines whether the user or group has been granted the appropriate Kubernetes role or Kubernetes ClusterRoles.

    06:41

    Lois: OK. What kind of permissions do users need to define custom Kubernetes RBAC roles and ClusterRoles?

    Mahendra: It's common to define custom Kubernetes RBAC roles and ClusterRoles for precise control. To create these, a user must already hold roles or ClusterRoles with equal or higher privileges. By default, users don't have any RBAC roles assigned, but there are default roles, such as cluster-admin, that carry super user privileges.

    07:12

    Nikita: I want to ask you about securing and handling sensitive information within Kubernetes clusters, and ensuring a robust security posture. What can you tell us about this?

    Mahendra: When creating Kubernetes clusters using OCI Container Engine for Kubernetes, there are two fundamental approaches to store application secrets. We can opt for storing and managing secrets in an external secrets store accessed seamlessly through the Kubernetes Secrets Store CSI driver. Alternatively, we have the option of storing Kubernetes secret objects directly in etcd.

    07:53

    Lois: OK, let’s tackle them one by one. What can you tell us about the first method, storing secrets in an external secret store?

    Mahendra: This integration allows Kubernetes clusters to mount multiple secrets, keys, and certificates into pods as volumes. The Kubernetes Secrets Store CSI driver facilitates seamless integration between our Kubernetes clusters and external secret stores.

    With the Secrets Store CSI driver, our Kubernetes clusters can mount and manage multiple secrets, keys, and certificates from external sources. These are accessible as volumes, making it easy to incorporate them into our application containers.
    OCI Vault is a notable external secrets store. And Oracle provides the Oracle Secrets Store CSI driver provider to enable Kubernetes clusters to seamlessly access secrets stored in Vault.
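
    As a rough sketch of how a pod consumes such secrets, here is a pod that mounts an external secrets store through the Secrets Store CSI driver. The pod name, mount path, and SecretProviderClass name are assumptions; the SecretProviderClass itself, with its provider-specific parameters for a store such as OCI Vault, would be defined separately:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-vault-secrets
    spec:
      containers:
        - name: app
          image: nginx:latest
          volumeMounts:
            - name: secrets-store              # secrets appear as files under this mount
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io   # Kubernetes Secrets Store CSI driver
            readOnly: true
            volumeAttributes:
              secretProviderClass: oci-vault-secrets  # references the external store configuration
    ```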

    08:54

    Nikita: And what about the second method? How can we store secrets as Kubernetes secret objects in etcd?

    Mahendra: In this approach, we store and manage our application secrets using Kubernetes secret objects. These objects are directly managed within etcd, the distributed key value store used for Kubernetes cluster coordination and state management.

    In OKE, etcd reads and writes data to and from block storage volumes in OCI block volume service. By default, OCI ensures security of our secrets and etcd data by encrypting it at rest. Oracle handles this encryption automatically, providing a secure environment for our secrets.

    Oracle takes responsibility for managing the master encryption key for data at rest, including etcd and Kubernetes secrets. This ensures the integrity and security of our stored secrets. If needed, there are options for users to manage the master encryption key themselves.
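
    For comparison, here is a minimal sketch of the second approach, a Kubernetes secret object that ends up in etcd. The names and values are placeholders:

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials
      namespace: app
    type: Opaque
    stringData:                  # stringData avoids manual base64 encoding
      username: app_user
      password: change-me        # example value only
    ```

    Pods can then consume the secret as environment variables or as a mounted volume, while the data itself is stored in etcd, which OKE encrypts at rest as described above.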

    10:06

    Lois: OK. We understand that managing secrets is a critical aspect of maintaining a secure Kubernetes environment, and one that users should not take lightly. Can we talk about OKE Container Image Security? What essential characteristics should container images possess to fortify the security posture of a user’s applications?

    Mahendra: In the dynamic landscape of containerized applications, ensuring the security of containerized images is paramount.

    It is not uncommon for the operating system packages included in images to have vulnerabilities. Managing these vulnerabilities enables you to strengthen the security posture of your system and respond quickly when new vulnerabilities are discovered.
    You can set up Oracle Cloud Infrastructure Registry, also known as Container Registry, to scan images in a repository for security vulnerabilities published in the publicly available Common Vulnerabilities and Exposures Database.

    11:10

    Lois: And how is this done? Is it automatic?

    Mahendra: To perform image scanning, Container Registry makes use of the Oracle Cloud Infrastructure Vulnerability Scanning Service and Vulnerability Scanning REST API. When new vulnerabilities are added to the CVE database, the container registry initiates automatic rescanning of images in repositories that have scanning enabled.

    11:41

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    12:20

    Nikita: Welcome back! Mahendra, what are the benefits of image scanning?

    Mahendra: You can gain valuable insights into each image scan conducted over the past 13 months. This includes an overview of the number of vulnerabilities detected and an overall risk assessment for each scan. Additionally, you can delve into comprehensive details of each scan featuring descriptions of individual vulnerabilities, their associated risk levels, and direct links to the CVE database for more comprehensive information. This historical and detailed data empowers you to monitor, compare, and enhance image security over time. You can also disable image scanning on a particular repository by removing the image scanner.

    13:11

    Nikita: Another characteristic that container images should have is unaltered integrity, right?

    Mahendra: For compliance and security reasons, system administrators often want to deploy software into a production system only when they are satisfied that the software has not been modified since it was published, which would compromise its integrity. Ensuring the unaltered integrity of software is paramount for compliance and security in production environments.

    13:41

    Lois: Mahendra, what are the mechanisms that guarantee this integrity within the context of Oracle Cloud Infrastructure?

    Mahendra: Image signatures play a pivotal role in not only verifying the source of an image but also ensuring its integrity. Oracle's Container Registry facilitates this process by allowing users or systems to push images and sign them using a master encryption key sourced from the OCI Vault.

    It's worth noting that an image can have multiple signatures, each associated with a distinct master encryption key. These signatures are uniquely tied to an image OCID, providing granularity to the verification process. Furthermore, the process of image signing mandates the use of an RSA asymmetric key from the OCI Vault, ensuring a robust and secure validation of the image's unaltered integrity.

    14:45

    Nikita: In the context of container images, how can users ensure the use of trusted sources within OCI?

    Mahendra: System administrators need the assurance that the software being deployed in a production system originates from a source they trust.

    Signed images play a pivotal role, providing a means to verify both the source and the integrity of the image. To further strengthen this, administrators can create image verification policies for clusters, specifying which master encryption keys must have been used to sign images. This enhances security by configuring container engine for Kubernetes clusters to allow the deployment of images signed with specific encryption keys from Oracle Cloud Infrastructure Registry. Users or systems retrieving signed images from OCIR can trust the source and be confident in the image's integrity.

    15:46

    Lois: Why is it imperative for users to use signed images from Oracle Cloud Infrastructure Registry when deploying applications to a Container Engine for Kubernetes cluster?
    Mahendra: This practice is crucial for ensuring the integrity and authenticity of the deployed images.

    To achieve this enforcement, it's important to note that an image in OCIR can have multiple signatures, each linked to a different master encryption key. This multikey association adds layers of security to the verification process.

    A cluster's image verification policy comes into play, allowing administrators to specify up to five master encryption keys. This policy serves as a guideline for the cluster, dictating which keys are deemed valid for image signatures.

    If a cluster's image verification policy doesn't explicitly specify encryption keys, any signed image can be pulled regardless of the key used. Any unsigned image can also be pulled, potentially compromising the security measures.

    16:56

    Lois: Mahendra, can you break down the essential permissions required to bolster security measures within a user’s OKE clusters?

    Mahendra: To enable clusters to include master encryption keys in image verification policies, you must give clusters permission to use keys from OCI Vault. For example, to grant this permission to a particular cluster in the tenancy, we use a policy that allows any user to use keys in the tenancy where request.user.id is set to the cluster's OCID.

    Additionally, for clusters to seamlessly pull signed images from Oracle Cloud Infrastructure Registry, it's vital to provide permissions for accessing repositories in OCIR.

    17:43

    Lois: I know this may sound like a lot, but OKE container image security is vital for safeguarding your containerized applications. Thank you so much, Mahendra, for being with us through the season and taking us through all of these important concepts.

    Nikita: To learn more about the topics covered today, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham…

    Lois Houston: And Lois Houston, signing off!

    18:16

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In this episode, hosts Lois Houston and Nikita Abraham speak with senior OCI instructor Mahendra Mehra about the capabilities of self-managed nodes in Kubernetes, including how they offer complete control over worker nodes in your OCI Container Engine for Kubernetes environment. They also explore the various options that are available to effectively manage your Kubernetes deployments. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Nikita: Hello and welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi everyone! Last week, we discussed how OKE virtual nodes can offer you a complete serverless Kubernetes experience.

    Nikita: Yeah, and in today’s episode, we’ll focus on self-managed nodes, where you get complete control over the worker nodes within your OKE environment. We’ll also talk about how you can manage your Kubernetes deployments.

    00:57

    Lois: To tell us more about this, we have Mahendra Mehra, a senior OCI instructor with Oracle University. Hi Mahendra! Welcome back! Let’s get started with self-managed nodes. Can you tell us what they are?

    Mahendra: In Container Engine for Kubernetes, a self-managed node is essentially a worker node that you personally create and host on a compute instance or instance pool within the compute service.

    Unlike managed nodes or virtual nodes, self-managed nodes are not grouped into node pools by default. They are often referred to as Bring Your Own Nodes, also abbreviated as BYON. If you wish to streamline administration and manage multiple self-managed nodes collectively, you can utilize the compute service to create a compute instance pool for hosting these nodes. This allows for greater flexibility and customization in your Kubernetes environment.

    01:58

    Nikita: Mahendra, what are some practical usage scenarios for OKE self-managed nodes?

    Mahendra: These nodes offer a range of advantages for specific use cases. Firstly, for specialized workloads, leveraging the compute service allows you to configure compute instances with shape and image combinations that may not be available for managed nodes or virtual nodes.

    This includes options like GPU shapes for hardware accelerated workloads or high frequency processor cores for demanding high-performance computing tasks. Secondly, if you require complete control over your compute instance configuration, self-managed nodes are the ideal choice. This gives you the flexibility to tailor each node to your specific requirements.

    Additionally, self-managed nodes are particularly well suited for Oracle Cloud Infrastructure cluster networks. These nodes provide high bandwidth, low latency RDMA connectivity, making them a preferred option for certain networking setups.

    Lastly, the use of compute instance pools with self-managed nodes enables the creation of infrastructure for handling complex distributed computing tasks. This can greatly enhance the efficiency of your Kubernetes environment. Consider these points carefully to determine the optimal use of OKE self-managed nodes in your deployments.

    03:30

    Lois: What do we need to consider before creating a self-managed node and integrating it into a cluster?

    Mahendra: There are two crucial aspects to address. Firstly, you need to confirm that the cluster to which you plan to add a self-managed node is configured appropriately.

    Secondly, it's essential to choose the right image for the compute instance hosting the self-managed node.

    03:53

    Nikita: Can you dive a little deeper into these prerequisites?

    Mahendra: To successfully integrate a self-managed node into your cluster, you must ensure that the cluster is an enhanced cluster. This is a crucial prerequisite for the addition of self-managed nodes.

    The flannel CNI plugin for pod networking should be utilized, not the VCN-native pod networking CNI plugin. This ensures optimal pod networking for your self-managed nodes. The control plane nodes of the cluster must be running Kubernetes version 1.25 or later. This is essential for compatibility and optimal performance.

    Lastly, maintain compatibility between the Kubernetes version on control plane nodes and worker nodes with a maximum allowable difference of two minor versions. This ensures a smooth and stable operation of your Kubernetes environment. Keep these cluster requirements in mind as you prepare to add self-managed nodes to your OKE cluster.

    04:55

    Lois: What about the image requirements when creating self-managed nodes?

    Mahendra: Choose either an Oracle Linux 7 or an Oracle Linux 8 image for your self-managed nodes. Ensure that the selected image has a release date of March 28, 2023 or later.

    Obtain the image OCID, also known as Oracle Cloud Identifier, from the respective sources. When specifying an image, be mindful of the Kubernetes version it contains.

    It's your responsibility to select an image with a Kubernetes version that aligns with the Kubernetes version skew support policy. Keep in mind that the Container Engine for Kubernetes does not automatically check the compatibility. So it's up to you to ensure harmony between the Kubernetes version on the self-managed node and the cluster's control plane nodes. These considerations will help you make informed choices when configuring images for your self-managed nodes.

    05:57

    Nikita: I really like the flexibility and customization OKE self-managed nodes offer. Now I want to switch gears a little and ask you about OCI Service Operator for Kubernetes. Can you tell us a bit about it?

    Mahendra: OCI Service Operator for Kubernetes is an open-source Kubernetes add-on that transforms the way we manage and connect OCI resources within our Kubernetes clusters. This powerful operator enables you to effortlessly create, configure, and interact with OCI resources directly from your Kubernetes environment, eliminating the need for constant navigation between the Oracle Cloud Infrastructure Console, CLI, or other tools. With the OCI Service Operator, you can seamlessly leverage kubectl to call the operator framework APIs, providing a streamlined and efficient workflow.

    06:53

    Lois: On what framework is the OCI Service Operator built?

    Mahendra: OCI Service Operator for Kubernetes is built using the open-source Operator Framework toolkit. The Operator Framework manages Kubernetes-native applications called operators in an effective, automated, and scalable way. The Operator Framework comprises essential components like Operator SDK. This leverages the Kubernetes controller-runtime library, providing high-level APIs and abstractions for writing operational logic. Additionally, it offers tools for scaffolding and code generation.

    07:35

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    08:14

    Nikita: Welcome back! Mahendra, are there any other components within OCI Service Operator to manage Kubernetes deployments?

    Mahendra: The other essential component is Operator Lifecycle Manager, also abbreviated as OLM. OLM extends Kubernetes by introducing a declarative approach to install, manage, and upgrade operators within a cluster. The OCI Service Operator for Kubernetes is intelligently packaged as an Operator Lifecycle Manager bundle, simplifying the installation process on Kubernetes clusters. This comprehensive bundle encapsulates all necessary objects and definitions, including CRDs, RBACs, ConfigMaps, and deployments, making it effortlessly deployable on a cluster.

    09:02

    Lois: So much that users can take advantage of! What about OCI Service Operator’s integration with other OCI services?

    Mahendra: One of its standout features is its seamless integration with a range of OCI services. The first one is Autonomous Database, specifically tailored for transaction processing, mixed workloads, analytics, and data warehousing. Enjoy automated patching, upgrades, and tuning, allowing routine maintenance tasks to be performed without human intervention.

    The next on the list is MySQL HeatWave, a fully managed database service designed for developing and deploying secure cloud-native applications using the widely adopted open-source MySQL database. Third on the list is the OCI Streaming service. Experience a fully managed, scalable, and durable solution for ingesting and consuming high-volume data streams in real time.

    Next is Service Mesh. This service offers a set of capabilities to facilitate communication among microservices within a cloud-native application. The communication is centrally managed and secured, ensuring a smooth and secure interaction. The OCI Service Operator for Kubernetes serves as a versatile bridge, seamlessly connecting your Kubernetes clusters with these powerful Oracle Cloud Infrastructure services.

    10:31

    Nikita: That’s awesome! I’ve also heard about Ingress Controllers. Can you tell us what they are?

    Mahendra: A Kubernetes Ingress Controller serves as the enforcer of rules defined in a Kubernetes Ingress. Its primary role is to manage, load balance, and route incoming traffic to specific service pods residing on worker nodes within the cluster. At the heart of this process is the Kubernetes Ingress Resource. Think of it as a blueprint, a rich configuration holding routing rules and options, specifically crafted for handling HTTP and HTTPS traffic. It serves as a powerful orchestrator for managing external communication with services inside the cluster.

    11:15

    Lois: Mahendra, how do Ingress Controllers bring about efficiency?

    Mahendra: Efficiency comes with consolidation. With a single ingress resource, you can neatly gather routing rules for multiple services. This eliminates the need to create a Kubernetes service of type LoadBalancer for each service seeking external or private network traffic.

    The OCI native ingress controller is a powerhouse. It crafts an OCI Flexible Load Balancer, your gateway to efficient request handling. The OCI native ingress controller seamlessly adapts to changes in routing rules with real-time updates.
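
    As an illustration of that consolidation, here is a sketch of a single Ingress resource routing to two services. The host, paths, service names, and ingress class name are assumptions; the class name should match whatever class your ingress controller, such as the OCI native ingress controller, is configured to serve:

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      ingressClassName: native-ic-ingress-class   # assumption; use your controller's class
      rules:
        - host: shop.example.com
          http:
            paths:
              - path: /catalog               # one set of routing rules...
                pathType: Prefix
                backend:
                  service:
                    name: catalog-svc
                    port:
                      number: 80
              - path: /checkout              # ...and another, behind the same load balancer
                pathType: Prefix
                backend:
                  service:
                    name: checkout-svc
                    port:
                      number: 80
    ```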

    11:53

    Nikita: And what about integration with an OKE cluster?

    Mahendra: Absolutely. It harmonizes with the cluster for streamlined traffic management. Operating as a single pod on a randomly selected worker node, it ensures a balanced workload distribution.

    12:08

    Lois: Moving on, let’s talk about running applications on ARM-based nodes and GPU nodes. We’ll start with ARM-based nodes.

    Mahendra: Typically, developers use ARM-based worker nodes in Kubernetes cluster to develop and test applications. Selecting the right infrastructure is crucial for optimal performance.

    12:28

    Nikita: What kind of options do developers have when running applications on ARM-based nodes?

    Mahendra: When it comes to running applications on ARM-based nodes, you have a range of options at your fingertips. First up, consider the choice between ARM-based bare metal shapes and flexible VM shapes. Each comes with its own unique advantages.

    Now, let's talk about the heart of it all, the Ampere A1 Compute instances. These instances are driven by the cutting-edge Ampere Altra processor, ensuring high performance and efficiency for your workloads. You must specify the ARM-based node pool shapes during cluster or node pool creation. Whether you choose to navigate through the user-friendly console, leverage the flexibility of the API, or command with precision through the CLI, the process remains seamless.

    13:23

    Lois: Can you define pods to run exclusively on ARM-based nodes within a heterogeneous cluster setup?

    Mahendra: In scenarios where a cluster comprises node pools with ARM-based shapes alongside other shapes, such as AMD64, you can employ a powerful tool called a node selector in the pod specification. This allows you to precisely dictate that an application should run exclusively on ARM-based worker nodes, ensuring your workloads align with the desired architecture.
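
    A minimal sketch of that node selector, using the standard kubernetes.io/arch label (the pod name and image are placeholders):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: arm-only-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64    # schedule only on ARM-based worker nodes
      containers:
        - name: app
          image: nginx:latest        # the image must provide an arm64 variant
    ```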

    13:55

    Nikita: And before we end this episode, can you explain why developers must run applications on GPU nodes?

    Mahendra: Originally designed for graphics manipulations, GPUs prove highly efficient in parallel data processing. This makes them a top choice for deploying data-intensive applications. Our GPU nodes utilize cutting-edge NVIDIA graphics cards, ensuring efficient and powerful data processing. Seamless access to this computing prowess is made possible through CUDA libraries.

    To ensure smooth integration, be sure to select a GPU shape and opt for an Oracle Linux GPU image preloaded with the essential CUDA libraries. CUDA here is Compute Unified Device Architecture, a parallel computing platform and application programming interface model created by NVIDIA. It allows developers to use NVIDIA graphics processing units for general-purpose processing, rather than just rendering graphics.
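
    As a hedged example of requesting GPU capacity, a pod can ask for an NVIDIA GPU through the nvidia.com/gpu resource exposed on GPU worker nodes. The pod name and image tag are assumptions:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: cuda-check
    spec:
      containers:
        - name: cuda
          image: nvidia/cuda:12.2.0-base-ubuntu22.04   # example CUDA base image tag
          command: ["nvidia-smi"]                      # prints the visible GPUs, then exits
          resources:
            limits:
              nvidia.com/gpu: 1    # GPU resources are requested via limits
    ```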

    14:57

    Nikita: Thank you, Mahendra, for another insightful session. We appreciate you joining us today.

    Lois: For more information on everything we discussed, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. You’ll find plenty of demos and skill checks to supplement your learning. Join us next week when we’ll discuss vital security practices for your OKE clusters on OCI. Until next time, this is Lois Houston…

    Nikita: And Nikita Abraham, signing off!

    15:28

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Want to gain insights into how virtual nodes provide a serverless Kubernetes experience? Join hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, as they compare managed nodes and virtual nodes. Continuing from the previous episode, they explore how virtual nodes enhance Kubernetes deployments in Oracle Cloud Infrastructure. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:25

    Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hey everyone! In our last episode, we examined OCI Container Engine for Kubernetes, including its key features and benefits.

    Lois: Yeah, that was an interesting one. Today, we’re going to discuss virtual nodes and their role in enhancing Kubernetes deployments in Oracle Cloud Infrastructure.

    Nikita: We’re going to compare virtual nodes and managed nodes, and look at their differences and advantages. To take us through all this, we have Mahendra Mehra with us. Mahendra is a senior OCI instructor with Oracle University.

    01:09

    Lois: Hi Mahendra! From our discussion last week, we know that when creating a node pool with Container Engine for Kubernetes, we have the option of specifying the type of Oracle nodes as either managed nodes or virtual nodes. But I’m sure there are some key differences in the features supported by each type, right?

    Mahendra: The primary point of differentiation between virtual nodes and managed nodes is in their management approach. When it comes to managed nodes, users are responsible for managing the nodes. They have the flexibility to configure them to meet the specific requirements.

    Users are also responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. You can create managed nodes and node pools in both basic clusters and enhanced clusters. Virtual nodes, on the other hand, provide a serverless Kubernetes experience, enabling users to run containerized applications at scale. The Kubernetes software is upgraded and security patches are applied while respecting the application's availability requirements.
    You can only create virtual nodes and virtual node pools in enhanced clusters.

    02:17

    Nikita: What about differences in terms of resource allocation? Are there any differences we should be aware of?

    Mahendra: When it comes to managed nodes, the resource allocation is at the node pool level, and users specify CPU and memory resource requirements for a given node pool. With virtual nodes, the resource allocation is done at the pod level, where you specify the CPU and memory resource requirements, but this time as requests and limits in the pod specification.
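
    Here is a minimal sketch of that pod-level allocation, with requests and limits declared per container (the pod name, image, and values are placeholders):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: billing-api
    spec:
      containers:
        - name: api
          image: myregistry.example.com/billing-api:1.0   # placeholder image
          resources:
            requests:              # what the pod is guaranteed, and billed against on virtual nodes
              cpu: "500m"
              memory: 1Gi
            limits:                # the ceiling the container may burst to
              cpu: "1"
              memory: 2Gi
    ```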

    02:45

    Lois: What about differences in the approach to load balancing?

    Mahendra: When it comes to managed nodes, load balancing is between the worker nodes, whereas in virtual nodes, load balancing is between pods.

    Also, load balancer security list management is never enabled, and you always must manually configure security rules. When using virtual nodes, load balancers distribute traffic among pods' IP addresses and an assigned node port.

    03:12

    Lois: And when it comes to pod networking?

    Mahendra: Under managed nodes, both the VCN-Native Pod Networking CNI plugin and the flannel CNI plugin are supported. When it comes to virtual nodes, only VCN-Native Pod Networking is supported.

    Also, only one VNIC is attached to each virtual node. Remember, IP addresses are not pre-allocated before pods are created. And the VCN-Native Pod Networking CNI plugin is not shown as running in the kube-system namespace. Pod subnet route tables must have route rules defined for a NAT gateway and a service gateway.

    03:48

    Nikita: OK… I have a question, Mahendra. When it comes to scaling Kubernetes clusters and node pools, can users adjust the cluster capacity in response to their changing requirements?

    Mahendra: When it comes to managed nodes, customers can scale the cluster and node pool up and down by changing the number of managed node pools and nodes respectively. They also have an option to enable autoscaling to automatically scale managed node pools and pods.

    When it comes to virtual nodes, operational overhead of cluster capacity management is handled for you by OCI. A virtual node pool scales automatically and can support up to 1000 pods per virtual node. Users also have an option to increase the number of virtual node pools or virtual nodes to scale up the cluster or node pool respectively.

    04:37

    Lois: And what about the pricing for each?

    Mahendra: Under managed nodes, you pay for the compute instances that execute applications, whereas under virtual nodes, you pay for the exact compute resources consumed by each Kubernetes pod.

    04:55

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    05:34

    Nikita: Welcome back! We were just discussing how when you have to choose between virtual nodes and managed nodes for your Kubernetes cluster, you need to consider several key points of differentiation, like the management approach, resource allocation, load balancing, pod networking, scaling, and pricing.

    Lois: Yeah, it’s important to understand the benefits and drawbacks of each approach to make informed decisions. Mahendra, now let’s talk about the prerequisites to configure clusters with virtual nodes and the IAM policies that are required to use virtual nodes.

    Mahendra: Before you can use virtual nodes, you always have to set up at least one IAM policy, which is required in all circumstances by both tenancy administrators and non-administrator users.

    This basically means, to create and use clusters with virtual nodes and virtual node pools, you must endorse Container Engine for Kubernetes service to allow virtual nodes to create container instances in the Container Engine for Kubernetes service tenancy with a VNIC connected to a subnet of a VCN in your tenancy.

    All you need to do is create a policy in the root compartment with policy statements from the official documentation page. You will find them under the Working with Virtual Nodes section within the Container Engine topic.

    06:55

    Lois: Mahendra, how do you create and configure virtual nodes and virtual node pools?

    Mahendra: Creating virtual nodes is a pivotal step and it involves setting up a virtual node pool in a new cluster. This is exclusively applicable to enhanced clusters. You can initiate this process using the console, the CLI, or the API.

    Configuring your virtual node pools involves defining critical parameters. Firstly, we have the node count. This represents the number of virtual nodes you wish to create within your virtual node pool. These nodes will be strategically placed in the availability domains that you specify.

    Now, it's important to carefully consider the placement of these nodes. You can distribute them across different availability domains, ensuring high availability for your applications. Additionally, you have the option to place these nodes in a regional subnet, which is the recommended approach for optimal performance.

    07:53

    Nikita: Isn’t the pod shape another important parameter? Can you tell us a bit about it?

    Mahendra: Pod shape refers to the type of shape you want for pods running on your virtual nodes within the virtual node pool. The pod shape is crucial as it determines the processor type on which you want your pods to run. It is important to note that only shapes available in your tenancy and supported by Container Engine for Kubernetes will be shown. So choose a shape that aligns with the requirements of your applications and services.
    A noteworthy point is that you explicitly specify the CPU and memory resource requirements for virtual nodes in the pod specification file. This ensures that your virtual nodes have the necessary resources to handle the workloads of your applications. Precision in specifying these requirements is key to achieving optimal performance.

    08:49

    Lois: What is the network setup for virtual nodes?

    Mahendra: The pod running on virtual nodes utilize VCN-native pod networking, and it's crucial to specify how these pods in the node pool communicate with each other. This involves setting up a pod subnet, which is a regional subnet configured specially to host pods.

    The pod subnet you specify for virtual nodes must be private. Oracle recommends that the pod subnet and the virtual node subnet are the same. In addition to subnet configurations, you have the option to use security rules in network security groups to control access to the pod subnet. This involves defining security rules within one or more NSGs that you specify, up to a maximum of five network security groups.

    Also, it is worth noting that using network security groups is recommended over using security lists. Now, let's shift our focus to virtual node communication. For this, you will configure a virtual node subnet. This subnet can be either a regional subnet, which is recommended, or an availability domain-specific subnet, and it's designed to host your virtual nodes.

    10:02

    Nikita: What are some key considerations for virtual node subnets?

    Mahendra: If you've specified load balancer subnets, ensure that the virtual node subnets are different. As with pod communication, Oracle recommends that the pod subnet and the virtual node subnet are the same, with the added condition that the virtual node subnet must be private.

    10:23

    Lois: Mahendra, can you take us through the fundamental tasks involved in managing virtual nodes and virtual node pools?

    Mahendra: Whether you're creating a new enhanced cluster using the Console, or looking to scale up an existing one, the creation process is versatile.

    Creating virtual nodes involves establishing a virtual node pool. Virtual nodes can only be created within enhanced clusters. The task of listing virtual nodes offers visibility into the virtual nodes within a virtual node pool. Whether you prefer the Console, the CLI, or the API, you have the flexibility to choose the method that suits your workflow best.

    For a comprehensive understanding of your virtual node pools, navigate to the Cluster List page, and click on the name of the cluster. This will unveil the specifics of the virtual node pool you are interested in.

    Now let's talk about updating virtual node pools. Whether you're initiating a new enhanced cluster or expanding an existing one, the update process ensures your cluster aligns with your evolving requirements. You can easily update the virtual node pool's name for clarity. You can also dynamically change the number of virtual nodes to meet workload demands, and you can fine-tune the node placement using options like availability domain and fault domain settings.

    Moving on to an essential aspect of node pool management, that is deletion. It's crucial to understand that deleting a node pool is a permanent action. Once deleted, the node pool cannot be recovered.

    12:04

    Lois: Before we wrap up, Mahendra, can you talk about the critical factors when allocating CPU, memory, and storage resources to pods provisioned by virtual nodes within your OKE cluster?

    Mahendra: To ensure optimal performance, OKE calculates CPU and memory allocations at the pod level, a distinctive feature when using virtual nodes. This approach stands in contrast to the traditional worker node-level allocation.

    The allocation process takes into account several factors. The first is the CPU and memory requests and limits, which are specified for each container in the pod spec file, if present. The second is the number of containers in the pod, since the total number of containers impacts the overall resource requirements. The third is the kube-proxy and container runtime requirements, a small but essential consideration taking up 0.25 GB of memory and negligible CPU. Pod CPU and memory requests must meet a minimum of 0.125 OCPUs and 0.5 GB of memory.
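
    As a rough worked example of that calculation, consider a pod with two containers. The values below are hypothetical, and CPU is shown in standard Kubernetes units rather than exact OCPU terms:

    ```yaml
    # Container requests total 750m CPU and 1.5Gi memory. Adding roughly
    # 0.25 GB for kube-proxy and the container runtime gives the pod an
    # allocation of about 0.75 CPU and 1.75 GB memory, comfortably above
    # the stated minimum of 0.125 OCPUs and 0.5 GB.
    apiVersion: v1
    kind: Pod
    metadata:
      name: worked-example
    spec:
      containers:
        - name: web
          image: nginx:latest
          resources:
            requests:
              cpu: "500m"
              memory: 1Gi
        - name: sidecar
          image: busybox:latest
          command: ["sh", "-c", "sleep 3600"]
          resources:
            requests:
              cpu: "250m"
              memory: 512Mi
    ```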

    13:12

    Nikita: Thank you, Mahendra, for this really insightful session. If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course.

    Lois: You’ll find demos that you watch as well as skill checks that you can attempt to better your understanding. In our next episode, we’ll journey into the world of self-managed nodes and discuss how to manage Kubernetes deployments. Until then, this is Lois Houston…

    Nikita: And Nikita Abraham, signing off!

    13:45

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Curious about how OCI Container Engine for Kubernetes (OKE) can transform the way your development team builds, deploys, and manages cloud-native applications? Listen to hosts Lois Houston and Nikita Abraham explore OKE's key features and benefits with senior OCI instructor Mahendra Mehra. Mahendra breaks down complex concepts into digestible bits, making it easy for you to understand the magic behind OKE. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:25

    Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! If you’ve been listening to us these last few weeks, you’ll know we’ve been discussing containerization, the Oracle Cloud Infrastructure Registry, and the basics of Kubernetes. Today, we’ll dive into the world of OCI Container Engine for Kubernetes, also referred to as OKE.

    Nikita: We’re joined by Mahendra Mehra, a senior OCI instructor with Oracle University, who will take us through the key features and benefits of OKE and also talk about working with managed nodes. Hi Mahendra! Thanks for joining us today.

    01:09

    Lois: So, Mahendra, what is OKE exactly?

    Mahendra: Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that empowers you to effortlessly deploy your containerized applications to the cloud.

    But that's just the beginning. OKE can transform the way you and your development team build, deploy, and manage cloud native applications.

    01:36

    Nikita: What would you say are some of its most defining features?

    Mahendra: One of the defining features of OKE is the flexibility it offers. You can specify whether you want to run your applications on virtual nodes or opt for managed nodes.

    Regardless of your choice, Container Engine for Kubernetes will efficiently provision them within your existing OCI tenancy on Oracle Cloud Infrastructure.

    Creating an OKE cluster is a breeze, and you have a couple of fantastic tools at your disposal: the console and the REST API. These make it super easy to get started.

    OKE relies on Kubernetes, which is an open-source system that simplifies the deployment, scaling, and management of containerized applications across clusters of hosts.

    Kubernetes is an incredible system that groups containers into logical units known as pods. And these pods make managing and discovering your applications very simple.
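
    For readers new to the term, here is a minimal sketch of a pod grouping two containers into one logical unit (names and images are placeholders):

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-pod
      labels:
        app: hello               # labels are how services and controllers discover pods
    spec:
      containers:                # both containers are scheduled together and share the pod's network
        - name: web
          image: nginx:latest
        - name: log-agent
          image: busybox:latest
          command: ["sh", "-c", "tail -f /dev/null"]   # placeholder sidecar
    ```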

    Not to mention, Container Engine for Kubernetes uses Kubernetes versions that are certified as conformant by the Cloud Native Computing Foundation, also abbreviated as CNCF. And here's the icing on the cake. Container Engine for Kubernetes is compliant with three ISO/IEC standards: 27001, 27017, and 27018. That's your guarantee of a secure and reliable platform.

    03:08

    Lois: That’s great. But how do you access all this power?

    Mahendra: You can define and create your Kubernetes cluster using the intuitive console and the robust REST API. Once your clusters are up and running, you can manage them using the Kubernetes command line, also known as kubectl, the user-friendly Kubernetes Dashboard, and the powerful Kubernetes API.

    03:32

    Nikita: I love the idea of an intuitive console and being able to manage everything from a centralized place.

    Lois: Yeah, that’s fantastic! Mahendra, can you talk us through the magic that happens behind the scenes? What’s Oracle’s role in all this?

    Mahendra: All the master nodes or control plane nodes are managed by Oracle. This includes components like etcd, the API server, and the controller manager among others. To ensure reliability, we make sure multiple copies of these master components are distributed across different availability domains.

    And we don't stop there. We also manage the Kubernetes dashboard and even handle the self-healing mechanism of both the cluster and the worker nodes. All of these are meticulously created and managed within your Oracle tenancy.

    04:19

    Lois: And what happens at the user’s end? What is their responsibility?

    Mahendra: At your end, you have the power to manage your worker nodes. Using different compute shapes, you can create and control them in your own user tenancy. So, as you can see, it's a perfect blend of Oracle's expertise and your control.

    04:38

    Nikita: So, in your opinion, why should users consider OKE their go-to solution for all things Kubernetes?

    Mahendra: Imagine a world where building and maintaining Kubernetes environments, be it master nodes or worker nodes, is no longer complex, costly, or even time-consuming.
    OKE is here to make your life easier by seamlessly integrating Kubernetes with various container lifecycle management products, which include container registries, CI/CD frameworks, networking solutions, storage options, and top-notch security features.

    And speaking of security, OKE gives you the tools you need to manage and control team access to production clusters, ensuring granular access to Kubernetes clusters in a straightforward process.

    It empowers developers to deploy containers quickly, provides DevOps teams with visibility and control for seamless Kubernetes management, and brings together Kubernetes container orchestration with Oracle's advanced cloud infrastructure. This results in robust control, top-tier security, IAM, and consistent performance.

    05:50

    Nikita: OK…a lot of benefits! Mahendra, I know there have been ongoing enhancements to the OKE service. So, when creating a new cluster with Container Engine for Kubernetes, what is the cluster type we should specify?

    Mahendra: The first type is the basic clusters. Basic clusters support all the core functionality provided by Kubernetes and Container Engine for Kubernetes. Basic clusters come with a service-level objective, but not a financially backed service level agreement. This means that Oracle guarantees a certain level of availability for the basic cluster, but there is no monetary compensation if that level is not met.

    On the other hand, we have the enhanced clusters. Enhanced clusters support all available features, including features not supported by basic clusters.

    06:38

    Lois: OK. So, can you tell us more about the features supported by enhanced clusters?

    Mahendra: As we move towards a more digitized world, the demand for infrastructure continues to rise. However, with virtual nodes, managing the infrastructure of your cluster becomes much simpler. The burden of manually scaling, upgrading, or troubleshooting worker nodes is removed, giving you more time to focus on your applications rather than the underlying infrastructure. Virtual nodes provide a great solution for managing large clusters with a high number of nodes that require frequent updates or scaling.

    With this feature, you can easily simplify the management of your cluster and focus on what really matters, that is your applications. Managing cluster add-ons can be a daunting task. But with enhanced clusters, you can now deploy and configure them in a more granular way. This means that you can manage both essential add-ons like CoreDNS and kube-proxy as well as a growing portfolio of optional add-ons like the Kubernetes Dashboard.

    With enhanced clusters, you have complete control over the add-ons you install or disable, the ability to select specific add-on versions, and the option to opt-in or opt-out of automatic updates by Oracle. You can also manage add-on specific customizations to tailor your cluster to meet the needs of your application.

    08:05

    Lois: Do users need to worry about deploying add-ons themselves?

    Mahendra: Oracle manages the lifecycle of add-ons so that you don't have to worry about deploying them yourself. This level of control over add-ons gives you the flexibility to customize your cluster to meet the unique needs of your applications, making managing your cluster a breeze.

    08:25

    Lois: What about scaling?

    Mahendra: Scaling your clusters to meet the demands of your workload can be a challenging task. However, with enhanced clusters, you can now provision more worker nodes in a single cluster, allowing you to deploy larger workloads on the same cluster which can lead to better resource utilization and lower operational overhead. Having fewer larger environments to secure, monitor, upgrade, and manage is generally more efficient and can help you save on cost.

    Remember, there are limits to the number of worker nodes supported on an enhanced cluster, so you should review the Container Engine for Kubernetes limits documentation and take into account the additional considerations when defining enhanced clusters with a large number of managed nodes.

    09:09

    Nikita: Ensuring the security of my cluster would be of utmost importance to me, right? How would I do that with enhanced clusters?

    Mahendra: With enhanced clusters, you can now strengthen cluster security through the use of workload identity. Workload identity enables you to define OCI IAM policies that authorize specific pods to make OCI API calls and access OCI resources. By scoping the policies to the Kubernetes service accounts associated with application pods, you can allow the applications running inside those pods to directly access OCI APIs based on the permissions provided by the policies.

    09:48

    Nikita: Mahendra, what type of uptime and server availability benefits do enhanced clusters provide?

    Mahendra: You can now rely on a financially backed service level agreement tied to Kubernetes API server uptime and availability. This means that you can expect a certain level of uptime and availability for your Kubernetes API server, and if it degrades below the stated SLA, you'll receive compensation. This provides an extra level of assurance and helps ensure that your cluster is highly available and performant.

    10:20

    Lois: Mahendra, do you have any tips for us to remember when creating basic and enhanced clusters?

    Mahendra: When using the console to create a cluster, a new cluster is created as an enhanced cluster by default unless you explicitly choose to create a basic cluster. If you don't select any enhanced features during cluster creation, you have the option to create the new cluster as a basic cluster. When using CLI or API to create a cluster, you can specify whether to create a basic cluster or an enhanced cluster. If you don't explicitly specify the type of cluster to create, a new cluster is created as a basic cluster by default.

    Creating a new cluster as an enhanced cluster enables you to easily add enhanced features later even if you didn't select any enhanced features initially. If you do choose to create a new cluster as a basic cluster, you can still choose to upgrade the basic cluster to an enhanced cluster later on. However, you cannot downgrade an enhanced cluster to a basic cluster. These points are really important to keep in mind when choosing between a basic cluster and an enhanced cluster.
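
    For readers who want to try this from code rather than the console, here is a minimal sketch using the OCI Python SDK. The OCIDs and Kubernetes version are placeholders, and the exact cluster-type values are assumptions based on how the OKE API exposes the cluster type, so verify them against your SDK version.

```python
# A minimal sketch of creating an OKE cluster with the OCI Python SDK while
# explicitly choosing the cluster type. OCIDs, the Kubernetes version, and the
# "type" enum values are placeholders/assumptions -- check the
# ContainerEngineClient documentation for your SDK version.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default
ce_client = oci.container_engine.ContainerEngineClient(config)

details = oci.container_engine.models.CreateClusterDetails(
    name="demo-cluster",
    compartment_id="ocid1.compartment.oc1..example",
    vcn_id="ocid1.vcn.oc1..example",
    kubernetes_version="v1.28.2",
    type="ENHANCED_CLUSTER",  # assumption: use "BASIC_CLUSTER" (or omit) for a basic cluster
)

response = ce_client.create_cluster(details)
# Cluster creation is asynchronous; track progress via the returned work request.
print(response.headers.get("opc-work-request-id"))
```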

    11:34

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    12:13

    Nikita: Welcome back! I want to move on to serverless Kubernetes with virtual nodes. But I think before we do that, we first need to have a basic understanding of what managed nodes are.

    Mahendra: Managed nodes run on compute instances within your tenancy, and are at least partly managed by you. In the context of Kubernetes, a node is a compute host that can be either a virtual machine or a bare metal host.
    As you are responsible for managing managed nodes, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. Nodes are responsible for running a collection of pods or containers, and they comprise two system components: the kubelet, which acts as the node's brain, and a container runtime such as CRI-O or containerd.

    13:07

    Nikita: Ok… so what are virtual nodes, then?

    Mahendra: Virtual nodes are fully managed and highly available nodes that look and act like real nodes to Kubernetes. They are built using the open source CNCF Virtual Kubelet Project, which provides the translation layer between OCI and Kubernetes.

    13:25

    Lois: So, what makes Oracle’s managed virtual Kubernetes product different?

    Mahendra: OCI is the first major cloud provider to offer a fully managed virtual Kubelet product that provides a serverless Kubernetes experience through virtual nodes. Virtual nodes are configured by customers and are located within a single availability domain and fault domain in OCI.

    Virtual nodes have two main components: pod management and container instance management. Virtual nodes delegate all responsibility for managing the lifecycle of pods to the virtual kubelet, whereas on a managed node, the kubelet is responsible for managing all the lifecycle state.

    The key distinction of virtual nodes is that they support up to 1,000 pods per virtual node, with the expectation of supporting more in the future.

    14:15

    Nikita: What are the other benefits of virtual nodes?

    Mahendra: Virtual nodes offer a fully managed experience where customers don't have to worry about managing the underlying infrastructure of their containerized applications. Virtual nodes simplify scaling patterns for customers. Customers can scale their containerized applications up or down quickly without worrying about the underlying infrastructure, and they can focus solely on their applications.

    With virtual nodes, customers only pay for the resources that their containerized applications use. This allows customers to optimize their costs and ensures that they are not paying for any unused resources. Virtual nodes can support over 10 times the number of pods that a normal node can. This means that customers can run more containerized applications on virtual nodes, which reduces operational burden and makes it easier to scale applications.

    Customers can leverage container instances, the serverless offering from OCI, to take advantage of many OCI functionalities natively. These functionalities include strong isolation and ultimate elasticity with respect to compute capacity.

    15:26

    Lois: When creating a cluster using Container Engine for Kubernetes, we have the flexibility to customize the worker nodes within the cluster, right? Could you tell us more about this customization?

    Mahendra: This customization includes specifying two key elements. Firstly, you can select the operating system image to be used for worker nodes. This image serves as a template for the worker node's virtual hard drive, and determines the operating system and other software installed.

    Secondly, you can choose the shape for your worker nodes. The shape defines the number of CPUs and the amount of memory allocated to each instance, ensuring it meets your specific requirements. This customization empowers you to tailor your OKE cluster to your exact needs.

    It is important to note that you can define and create OKE clusters using both the console and the REST API. This level of control is especially valuable for your development team when building, deploying, and managing cloud native applications.
    You have the option to specify whether applications should run on virtual nodes or managed nodes. And Container Engine for Kubernetes efficiently provisions them on Oracle Cloud Infrastructure within your existing OCI tenancy. This flexibility ensures that you can adapt your OKE cluster to suit the specific requirements of your projects and workloads.
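
    As a rough illustration of that customization, the sketch below adds a managed node pool to an existing cluster with the OCI Python SDK, specifying the worker node image and shape. All OCIDs are placeholders, the model and field names are assumptions to verify against your SDK version, and placement details (subnets, availability domains) are omitted for brevity.

```python
# A hedged sketch of adding a managed node pool to an existing OKE cluster,
# choosing the operating system image and the shape (CPUs/memory) of each
# worker node. OCIDs are placeholders; placement configuration is omitted.
import oci

config = oci.config.from_file()
ce_client = oci.container_engine.ContainerEngineClient(config)

node_pool_details = oci.container_engine.models.CreateNodePoolDetails(
    compartment_id="ocid1.compartment.oc1..example",
    cluster_id="ocid1.cluster.oc1..example",
    name="demo-node-pool",
    kubernetes_version="v1.28.2",
    node_shape="VM.Standard.E4.Flex",  # the shape: CPUs and memory per worker
    node_shape_config=oci.container_engine.models.CreateNodeShapeConfigDetails(
        ocpus=2, memory_in_gbs=16
    ),
    node_source_details=oci.container_engine.models.NodeSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"  # the operating system image for workers
    ),
    # node_config_details (subnets, availability domains, node count) would
    # normally be supplied here as well.
)

ce_client.create_node_pool(node_pool_details)
```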

    16:56

    Lois: Thank you so much, Mahendra, for giving us your time today. For more on the topics we discussed, visit mylearn.oracle.com and look for the OCI Container Engine for Kubernetes Specialist course. Join us next week as we dive deeper into working with OKE virtual nodes. Until then, this is Lois Houston…

    Nikita: And Nikita Abraham, signing off!

    17:18

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In this episode, Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, dive into the fundamentals of Kubernetes. They talk about how Kubernetes tackles challenges in deploying and managing microservices, and enhances software performance, flexibility, and availability. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Lois: Hello and welcome to another episode of the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone! We’ve spent the last two episodes getting familiar with containerization and the Oracle Cloud Infrastructure Registry. Today, it’s going to be all about Kubernetes. So if you've heard of Kubernetes but you don't know what it is, or you've been playing with Docker and containers and want to know how to take it to the next level, you’ll want to stay with us.

    Lois: That’s right, Niki. We’ll be chatting with Mahendra Mehra, a senior OCI instructor with Oracle University, about the challenges in containerized applications within a complex business setup and how Kubernetes facilitates container orchestration and improves its effectiveness, resulting in better software performance, flexibility, and availability.

    01:20

    Nikita: Hi Mahendra. To start, can you tell us when you would use Kubernetes?

    Mahendra: While deploying and managing microservices in a distributed environment, you may run into issues such as failures or container crashes, or issues scheduling containers to specific machines depending upon the configuration. You also might face issues while upgrading or rolling back the applications which you have containerized. Scaling containers up or down across a set of machines can be troublesome.

    01:50

    Lois: And this is where Kubernetes helps automate the entire process?

    Mahendra: Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.
    You can think of Kubernetes as you would a conductor for an orchestra. Similar to how a conductor would say how many violins are needed, which ones play first, and how loud they should play, Kubernetes would say how many webserver front-end containers or back-end database containers are needed, what they serve, and how many resources are to be dedicated to each one.

    02:27

    Nikita: That’s so cool! So, how does Kubernetes work?

    Mahendra: In Kubernetes, there is a master node, and there are multiple worker nodes. Each worker node can handle multiple pods. Pods are just a bunch of containers clustered together as a working unit. If a worker node goes down, Kubernetes starts new pods on the functioning worker node.

    02:47

    Lois: So, the benefits of Kubernetes are…

    Mahendra: Kubernetes can containerize applications of any scale without any downtime. Kubernetes can self-heal containerized applications, making them resilient to unexpected failures.

    Kubernetes can autoscale containerized applications as per the workload and ensure optimal utilization of cloud resources. Kubernetes also greatly simplifies the process of deployment operations. With Kubernetes, however complex an operation is, it can be performed reliably by executing a couple of commands at the most.

    03:19

    Nikita: That’s great. Mahendra, can you tell us a bit about the architecture and main components of Kubernetes?

    Mahendra: The Kubernetes cluster has two main components. One is the control plane, and one is the data plane. The control plane hosts the components used to manage the Kubernetes cluster. And the data plane basically hosts all the worker nodes that can be virtual machines or physical machines.

    These worker nodes basically host pods which run one or more containers. The containers running within these pods are making use of Docker images, which are managed within the image registry. In case of OCI, it is the container registry.

    03:54

    Lois: Mahendra, you mentioned nodes and pods. What are nodes?

    Mahendra: It is the smallest unit of computing hardware within Kubernetes. Its work is to encapsulate one or more applications as containers. A node is a worker machine that has a container runtime environment within it.

    04:10

    Lois: And pods?

    Mahendra: A pod is a basic object of Kubernetes, and it is in charge of encapsulating containers, storage resources, and network IPs. One pod represents one instance of an application within Kubernetes. And these pods are launched in a Kubernetes cluster, which is composed of nodes. This means that a pod runs on a node but can easily be instantiated on another node.

    04:32

    Nikita: Can you run multiple containers within a pod?

    Mahendra: A pod can even contain more than one container if these containers are relatively tightly coupled. A pod is usually meant to run one application container inside of it, but you can run multiple containers inside one pod. Usually, this is only the case if you have one main application container and a helper container or some sidecar containers that have to run inside of that pod.

    Every pod is assigned a unique private IP address, using which the pods can communicate with one another. Pods are meant to be ephemeral, which means they die easily. And if they do, upon re-creation, they are assigned a new private IP address. In fact, Kubernetes can scale the number of these pods to adapt to the incoming traffic, consequently creating or deleting pods on demand.

    Kubernetes guarantees the availability of the pods and replicas specified, but not the liveness of each individual pod. This means that other pods that need to communicate with this application or component cannot rely on the underlying individual pod's IP address.
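
    To make the pod idea concrete, here is a minimal sketch using the official Kubernetes Python client that defines a pod with one main application container and one helper sidecar container. The image names and labels are placeholders, and it assumes a cluster reachable through your local kubeconfig.

```python
# A minimal sketch of a pod with a main application container and a sidecar,
# created with the official Kubernetes Python client. Images and labels are
# placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig, e.g. from an OKE cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-pod", labels={"app": "web"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="web", image="nginx:1.25",
                               ports=[client.V1ContainerPort(container_port=80)]),
            client.V1Container(name="log-sidecar", image="busybox:1.36",
                               command=["sh", "-c", "tail -f /dev/null"]),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```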

    05:35

    Lois: So, how does Kubernetes manage traffic to this ever-changing number of pods with changing IP addresses?

    Mahendra: This is where another component of Kubernetes called services comes in as a solution. A service gets allocated a virtual IP address and lives until explicitly destroyed. Requests to the service get redirected to the appropriate pods; thus, the service acts as a stable endpoint used for inter-component or application communication.

    And the best part here is that the lifecycle of service and the pods are not connected. So even if the pod dies, the service and the IP address will stay, so you don't have to change their endpoints anymore.

    06:13

    Nikita: What types of services can you create with Kubernetes?

    Mahendra: There are two types of services that you can create. An external service is created to allow external users to connect to the containerized applications within the pods. Internal services can also be created that restrict the communication within the cluster. Services can be exposed in different ways by specifying a particular type.

    06:33

    Nikita: And how do you define these services?

    Mahendra: There are three ways in which you can define services. The first one is the ClusterIP, which is the default service type that exposes services on an internal IP within the cluster. This type makes the service only reachable from within the cluster.

    You can specify the type of service as NodePort. NodePort basically exposes the service on the same port of each selected node in the cluster using network address translation and makes the service accessible from outside the cluster using the node IP and the NodePort combination. This is basically a superset of ClusterIP.

    You can also go for a LoadBalancer type, which basically creates an external load balancer in the current cloud. OCI supports LoadBalancer types. It also assigns a fixed external IP to the service. And the LoadBalancer type is a superset of NodePort.
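
    A small sketch of those three service types with the Kubernetes Python client follows; only the type field changes between them, and the names, selector, and ports are placeholders.

```python
# A minimal sketch of the three service types described above, using the
# Kubernetes Python client. Names, labels, and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def make_service(name: str, svc_type: str) -> client.V1Service:
    return client.V1Service(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1ServiceSpec(
            type=svc_type,                      # ClusterIP, NodePort, or LoadBalancer
            selector={"app": "web"},            # pods this service routes traffic to
            ports=[client.V1ServicePort(port=80, target_port=80)],
        ),
    )

for name, svc_type in [("web-internal", "ClusterIP"),
                       ("web-nodeport", "NodePort"),
                       ("web-lb", "LoadBalancer")]:  # on OCI, this provisions a load balancer
    v1.create_namespaced_service(namespace="default", body=make_service(name, svc_type))
```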

    07:25

    Lois: There’s another component called ingress, right? When do you use that?

    Mahendra: An ingress is used when we have multiple services on our cluster, and we want user requests routed to the services based on the request path, and also if you want to talk to your application with a secure protocol and a domain name.
    Unlike NodePort or LoadBalancer, ingress is not actually a type of service. Instead, it is an entry point that sits in front of multiple services within the cluster. It can be defined as a collection of routing rules that govern how external users access services running inside a Kubernetes cluster. Ingress is most useful if you want to expose multiple services under the same IP address, and these services all use the same Layer 7 protocol, typically HTTP.
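
    Here is a hedged sketch of an ingress that routes two paths on a single address to two different services, using the networking/v1 API of the Kubernetes Python client. The hostname, paths, and service names are placeholders, and an ingress controller is assumed to already be installed in the cluster.

```python
# A hedged sketch of an ingress routing two URL paths on one address to two
# backend services. Hostname, paths, and service names are placeholders; an
# ingress controller must already be running in the cluster.
from kubernetes import client, config

config.load_kube_config()

def path(route: str, service: str) -> client.V1HTTPIngressPath:
    return client.V1HTTPIngressPath(
        path=route,
        path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=service,
                port=client.V1ServiceBackendPort(number=80),
            )
        ),
    )

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(
        rules=[client.V1IngressRule(
            host="shop.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                path("/catalog", "catalog-svc"),
                path("/checkout", "checkout-svc"),
            ]),
        )]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```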

    08:10

    Lois: Mahendra, what about deployments in Kubernetes?

    Mahendra: A deployment is an object in Kubernetes that lets you manage a set of identical pods. Without a deployment, you will need to create, update, and delete a bunch of pods manually. With the deployment, you declare a single object in a YAML file, and the object is responsible for creating the pods, making sure they stay up-to-date and ensuring there are enough of them running.

    You can also easily autoscale your applications using a Kubernetes deployment. In a nutshell, the Kubernetes deployment object lets you deploy a replica set of your pods, update the pods and the replica sets. It also allows you to roll back to your previous deployment versions. It helps you scale a deployment. It also lets you pause or continue a deployment.
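
    A minimal deployment sketch with the Kubernetes Python client is shown below; the image, labels, and replica counts are placeholders.

```python
# A minimal sketch of a deployment that keeps three identical pod replicas
# running, plus a follow-up scale operation. Image and labels are placeholders.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-deploy"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25")
            ]),
        ),
    ),
)

apps = client.AppsV1Api()
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling later is a one-field change applied to the deployment's scale subresource.
apps.patch_namespaced_deployment_scale(
    name="web-deploy", namespace="default", body={"spec": {"replicas": 5}}
)
```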

    08:59

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    09:37

    Nikita: Welcome back! We were talking about how useful a Kubernetes deployment is in scaling operations. Mahendra, how do pods communicate with each other?

    Mahendra: Pods communicate with each other using a service. For example, my application has a database endpoint. Let's say it's a MySQL service that it uses to communicate with the database. But where do you configure this database URL or endpoints? Usually, you would do it in the application properties file or as some kind of an external environment variable. But usually, it's inside the build image of the application.

    So for example, if the endpoint of the service or the service name, in this case, changes to something else, you would have to adjust the URL in the application. And this will cause you to rebuild the entire application with a new version, and you will have to push it to the repository. You'll then have to pull that new image into your pod and restart the whole thing. For a small change like database URL, this is a bit tedious.

    So for that purpose, Kubernetes has a component called ConfigMap. ConfigMap is a Kubernetes object that maintains a key value store that can easily be used by other Kubernetes objects, such as pods, deployments, and services. Thus, you can define a ConfigMap composed of all the specific variables for your environment.

    In Kubernetes, now you just need to connect your pod to the ConfigMap, and the pod will read all the new changes that you have specified within the ConfigMap, which means you don't have to go on to build a new image every time a configuration changes.
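
    As a small illustration, this sketch creates a ConfigMap holding a database endpoint and injects it into a container as an environment variable, so changing the URL does not require rebuilding the image. Names and values are placeholders.

```python
# A minimal sketch: keep the database endpoint in a ConfigMap and inject it
# into a container as an environment variable. Names and values are placeholders.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.create_namespaced_config_map(
    namespace="default",
    body=client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name="app-config"),
        data={"DB_URL": "mysql-service:3306"},
    ),
)

# The container reads DB_URL from the ConfigMap instead of a baked-in value.
container = client.V1Container(
    name="web",
    image="my-app:1.0",
    env=[client.V1EnvVar(
        name="DB_URL",
        value_from=client.V1EnvVarSource(
            config_map_key_ref=client.V1ConfigMapKeySelector(name="app-config", key="DB_URL")
        ),
    )],
)
```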

    11:07

    Lois: So then, I’m just wondering, if we have a ConfigMap to manage all the environment variables and URLs, should we be passing our username and password in the same file?

    Mahendra: The answer is no. Passwords or other credentials within a ConfigMap in plain text format would be insecure, even though it's an external configuration. So for this purpose, Kubernetes has another component called a secret.
    Kubernetes secrets are objects that store sensitive data, such as passwords, OAuth tokens, and SSH keys, within your cluster. Using secrets gives you more flexibility in a pod lifecycle definition and control over how sensitive data is used. It reduces the risk of exposing the data to unauthorized users.

    11:50

    Nikita: So, you’re saying that the secret is just like ConfigMap or is there a difference?

    Mahendra: A secret is just like a ConfigMap, but the difference is that it is used to store secret data such as credentials, for example, database usernames and passwords, and it's stored in a base64-encoded format. The kubelet stores this secret in a temporary file system on the node.
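
    A minimal sketch of creating such a secret with the Kubernetes Python client follows; the client's data field expects base64-encoded values, and the credentials shown are placeholders only.

```python
# A minimal sketch of creating a secret. Kubernetes stores the "data" values
# base64 encoded; the credentials here are placeholders only.
import base64
from kubernetes import client, config

config.load_kube_config()

def b64(value: str) -> str:
    return base64.b64encode(value.encode()).decode()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    type="Opaque",
    data={"username": b64("app_user"), "password": b64("not-a-real-password")},
)

client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)
```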

    12:11

    Lois: Mahendra, how does data storage work within Kubernetes?

    Mahendra: So let's say we have this database pod that our application uses, and it has some data or generates some data. What happens when the database container or the pod gets restarted? Ordinarily, the data would be gone, and that's problematic and inconvenient, obviously, because you want your database data or log data to be persisted reliably for the long term.
    To achieve this, Kubernetes has a solution called volumes. A Kubernetes volume basically is a directory that contains data accessible to containers in a given pod within the Kubernetes platform. Volumes provide a plug-in mechanism to connect ephemeral containers with persistent data stores elsewhere.
    The data within a volume will outlast the containers running within the pod. Containers can shut down and restart because they are ephemeral units. Data remains saved in the volume even if a container crashes because a container crash is not enough to cut off a pod from a node.
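
    To illustrate, the sketch below creates a persistent volume claim and mounts it into a database pod so the data outlives container restarts. The claim name, storage size, image, and password are placeholders, and the exact model names may vary slightly between client versions.

```python
# A minimal sketch of a pod that mounts a persistent volume claim so the
# database files outlive container restarts. Claim name, size, image, and
# password are placeholders.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="db"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="mysql",
            image="mysql:8.0",
            env=[client.V1EnvVar(name="MYSQL_ROOT_PASSWORD", value="example")],
            volume_mounts=[client.V1VolumeMount(name="data", mount_path="/var/lib/mysql")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(claim_name="db-data"),
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```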

    13:10

    Nikita: Another main component of Kubernetes is a StatefulSet, right? What can you tell us about it?

    Mahendra: Stateful applications are applications that store data and keep track of it. All databases, such as MySQL, Oracle, and PostgreSQL, are examples of Stateful applications. In a modern web application, we see stateless applications connecting with Stateful applications to serve the user request.
    For example, a Node.js application is a stateless application that receives new data on each request from the user. This application is then connected with a Stateful application, such as MySQL database, to process the data. MySQL stores the data and keeps updating the database on the user's request.
    Now, assume you deployed a MySQL database in the Kubernetes cluster and scaled this to another replica, and a frontend application wants to access the MySQL cluster to read and write data. The read request will be forwarded to both these pods. However, the write request will only be forwarded to the first primary pod. And the data will be synchronized with other pods. You can achieve this by using the StatefulSets.
    Deleting or scaling down a StatefulSet will not delete the volumes associated with the Stateful applications. This gives you data safety. If you delete the MySQL pod or if the MySQL pod restarts, you still have access to the data in the same volume.

    So overall, a StatefulSet is a good fit for those applications that require unique network identifiers; stable persistent storage; ordered, graceful deployment and scaling; as well as ordered, automatic rolling updates.
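
    Here is a hedged sketch of such a StatefulSet with the Kubernetes Python client: two MySQL replicas, each with its own volume claim created from a template. It assumes a headless service named mysql already exists, and the image, size, and password are placeholders.

```python
# A hedged sketch of a MySQL StatefulSet with two replicas and a volume claim
# template, so each replica gets its own persistent storage. A headless
# service named "mysql" is assumed to exist; values are placeholders.
from kubernetes import client, config

config.load_kube_config()
labels = {"app": "mysql"}

statefulset = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="mysql"),
    spec=client.V1StatefulSetSpec(
        service_name="mysql",                      # the assumed headless service
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="mysql",
                image="mysql:8.0",
                env=[client.V1EnvVar(name="MYSQL_ROOT_PASSWORD", value="example")],
                volume_mounts=[client.V1VolumeMount(name="data", mount_path="/var/lib/mysql")],
            )]),
        ),
        volume_claim_templates=[client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="data"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
            ),
        )],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=statefulset)
```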

    14:43

    Lois: Before we wrap up, I want to ask you about the features of Kubernetes. I’m sure there are countless, but can you tell us the most important ones?

    Mahendra: Health checks are used to check the container's readiness and liveness status.

    Readiness probes are intended to let Kubernetes know if the app is ready to serve the traffic.

    Networking plays a significant role in container orchestration to isolate independent containers, connect coupled containers, and provide access to containers from the external clients. Service discovery allows containers to discover other containers and establish connections to them.

    Load balancing is a dedicated service that knows which replicas are running and provides an endpoint that is exposed to the clients. Logging allows us to oversee the application behavior.
    The rolling update allows you to update a deployed containerized application with minimal downtime using different update scenarios. The typical way to update such an application is to provide new images for its containers. Containers, in a production environment, can grow from a few to many in no time. Kubernetes makes managing multiple containers an easy task.
    And lastly, resource usage monitoring: resources such as CPU and RAM must be monitored within the Kubernetes environment. Kubernetes resource usage looks at the amount of resources that are utilized by a container or pod within the Kubernetes environment. It is very important to keep an eye on the resource usage of the pods and containers, as more usage translates to more cost.
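
    As a small example of the health checks mentioned above, this container definition attaches a readiness probe and a liveness probe; the endpoint paths and timings are placeholders.

```python
# A minimal sketch of readiness and liveness probes on a container; endpoint
# paths and timings are placeholders.
from kubernetes import client

container = client.V1Container(
    name="web",
    image="my-app:1.0",
    ports=[client.V1ContainerPort(container_port=8080)],
    readiness_probe=client.V1Probe(          # "is the app ready to serve traffic?"
        http_get=client.V1HTTPGetAction(path="/ready", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    ),
    liveness_probe=client.V1Probe(           # "is the app still alive, or should it restart?"
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=15,
        period_seconds=20,
    ),
)
```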

    16:18

    Nikita: I think we can wind up our episode with that. Thank you, Mahendra, for joining us today. Kubernetes sure can be challenging to work with, but we covered a lot of ground in this episode.

    Lois: That’s right, Niki! If you want to learn more about the rich features Kubernetes offers, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Remember, all the training is free, so you can dive right in! Join us next week when we'll take a look at the fundamentals of Oracle Cloud Infrastructure Container Engine for Kubernetes. Until then, Lois Houston…

    Nikita: And Nikita Abraham, signing off!

    16:57

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In this episode, hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, discuss how Oracle Cloud Infrastructure Registry simplifies the development-to-production workflow for developers. Listen to Mahendra explain important container registry concepts, such as images, repositories, image tags, and image paths, as well as how they relate to each other. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and I’m joined by Lois Houston, Director of Innovation Programs.

    Lois: Hi there! This is our second episode on OCI Container Engine for Kubernetes, and today we’re going to spend time discussing container registries with our colleague and senior OCI instructor, Mahendra Mehra.

    Nikita: We’ll talk about how you can become proficient in managing Oracle Cloud Infrastructure Registry, a vital component in your container workflow.

    00:58

    Lois: Hi Mahendra, can you explain what Oracle Cloud Infrastructure Registry, or OCIR, is and how it simplifies the container image management process?

    Mahendra: OCIR is an Oracle-managed registry designed to simplify the development-to-production workflow for developers. It offers a range of functionalities, serving as a private docker registry for internal use where developers can easily store, share, and manage container images.

    The strength of OCIR lies in its highly available and scalable architecture. It leverages OCI to ensure reliable deployment of applications, and developers can use OCIR not only as a private registry but also as a public registry, facilitating the pulling of images from public repositories for users with internet access.

    01:55

    Lois: But what sets OCIR apart?

    Mahendra: What sets OCIR apart is its compliance with the Open Container Initiative standards, allowing the storage of container images conforming to the OCI specifications.

    It goes a step further by supporting manifest lists, sometimes known as multi-architecture images, accommodating diverse architectures like ARM and AMD64. Additionally, OCIR extends its support to Helm charts. Security is a priority with OCIR, offering private access through a service gateway. This means that OCI resources within a VCN in the same region can securely access OCIR without exposing them to the public internet.

    02:46

    Nikita: OK. What are some other key advantages of OCIR?

    Mahendra: Firstly, OCIR seamlessly integrates with the Container Engine for Kubernetes, ensuring a cohesive container management experience. In terms of security, OCIR provides flexibility by allowing registries to be either private or public, giving administrators control over accessibility.

    It is intricately integrated with IAM, offering straightforward authentication through OCI Identity. Another notable benefit is regional availability. You can efficiently pull container images from the same region as your deployments. For high-performance, availability, and low-latency image operations, OCIR leverages the robust infrastructure of OCI, enhancing the overall reliability of image push and pull operations.

    OCIR ensures anywhere access, allowing you to utilize container CLI for image operations from various locations, be it on the cloud, on-premises, or even from personal laptops.

    03:57

    Lois: I believe OCIR has repository quotas? Is there a cap on them?

    Mahendra: In each enabled region for your tenancy, you can establish up to 500 repositories with a cumulative storage limit of 500 GB. Each repository is capable of holding up to 100,000 images. Importantly, charges apply only for stored images.

    04:21

    Nikita: That’s good to know, Mahendra. I want to move on to basic container registry concepts. Maybe we can start with what an image is.

    Mahendra: An image is basically a read-only template with instructions for creating a container. It holds the application that you want to run as a container, along with any dependencies that are required. Container registry is an Open Container Initiative-compliant registry. As a result, you can store any artifacts that conform to Open Container Initiative specifications, such as Docker images, manifest lists, sometimes also known as multi-architecture images, and Helm charts.

    05:02

    Lois: And what’s a repository then?

    Mahendra: It's a meaningfully named collection of related images which are grouped together for convenience in a container registry. There are different versions of the same source image, which are grouped together into the same repository.

    You can have multiple images stored under this repository. The only thing that you need to keep changing is the image version. Every image version is given a tag. And the tag uniquely identifies the image.

    05:33

    Lois: Is it possible to make the repository public or private?

    Mahendra: Depending upon your need, a repository can be made private or public. One important thing to note is that the user needs to have an OCI username and authentication token before being able to push or pull an image from OCIR.

    05:52

    Nikita: There are so many terms that you come across when working with repositories and container registry, right? Could you take us through them and explain
    how they relate to each other? I’ve heard of the region key and tenancy namespace.

    Mahendra: The region key identifies the container registry region that you are using. A tenancy namespace is an auto-generated random and immutable string of alphanumeric characters. The tenancy namespace can be retrieved from the value of your object storage namespace field.

    Repository name is the name of a repository in container registry, to and from which you can push and pull images. Repository names can include one or more slash characters and are unique across all the compartments in the entire tenancy.

    You should note that although a repository name can include slash characters, the slash does not represent a hierarchical directory structure. It is simply one character in the string of characters. As a convenience, you might choose to start the name of different repositories with the same string. A registry identifier is the combination of your container registry region key and the tenancy namespace.

    07:07

    Lois: What about an image tag and an image path? How do they differ from each other?

    Mahendra: A tag or an image tag is a string used to refer to a particular image in a known registry. The term "image name" is sometimes used as a shorthand way to refer to a particular image in a particular repository. A tag can be a numerical value or it can be a string. An image path is a fully qualified path to a particular image in a registry. It extends the repository path by adding tags associated with the image.
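
    Putting those pieces together, a fully qualified OCIR image path can be assembled like this; every value below is a placeholder for your own region key, tenancy namespace, repository name, and tag.

```python
# A minimal sketch of assembling a fully qualified OCIR image path from the
# pieces described above; all values are placeholders for your own tenancy.
region_key = "iad"                           # container registry region key
tenancy_namespace = "axabc123def"            # auto-generated tenancy namespace
repository_name = "project01/acme-web-app"   # may contain slashes, but is not hierarchical
tag = "1.0.2"

registry_identifier = f"{region_key}.ocir.io/{tenancy_namespace}"
image_path = f"{registry_identifier}/{repository_name}:{tag}"
print(image_path)  # iad.ocir.io/axabc123def/project01/acme-web-app:1.0.2
```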

    07:46

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free. So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    08:24

    Nikita: Welcome back! Mahendra, from what you’ve told us, OCIR seems like such a pivotal tool for modern containerized workflows, with its seamless integration, robust security measures, regional accessibility, efficient image management. So, how do we actually manage OCIR?

    Mahendra: Managing OCIR can be done in three ways. Starting with managing the repository itself, followed by managing the images within the repository, and, last but not least, managing the overall security of your repository alongside the images.

    08:58

    Nikita: Can we dive into each of these approaches in a little more detail? How does managing the repository itself work?

    Mahendra: You can create an empty repository in a compartment and give it a name that's unique across all the compartments in the entire tenancy. There is a limit to the number of repositories you can have in a given region in a tenancy. So, when you no longer need a repository, it makes sense to delete it from the Oracle Cloud Infrastructure registry. Make a note that when you delete a repository, it can take up to 48 hours for the deletion to take effect and for the storage to actually be released.

    When you create a new repository in Oracle Cloud Infrastructure Registry, you specify the compartment in which you want to create it. Having created the repository in one compartment, you can subsequently move it to a different compartment. The reasons can be many. It can be to change the users who are authorized to use the repository or to change how the billing for a repository is charged.

    09:52

    Lois: OK. And what about managing images within the repository?

    Mahendra: You can view the images stored on OCIR using the OCI Console or using the docker images command from your Docker client after logging in to the OCIR repo. To push an image, you first use the docker tag command to create a copy of the local source image as a new image. As a name for the new image, you specify the fully qualified path to the target location in your container registry where you want to push the image, including the name of a repository.

    In order to pull an image, you must be logged in to the OCIR registry using the auth token and use the docker pull command followed by the fully qualified name of the image you wish to download to your Docker client.
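
    Here is a hedged sketch of that tag, push, and pull flow using the Docker SDK for Python instead of the CLI. It assumes a local image named my-app:latest already exists, and the region, tenancy namespace, repository, username, and auth token are placeholders.

```python
# A hedged sketch of the tag/push/pull flow against OCIR using the Docker SDK
# for Python (the "docker" package). Registry values and credentials are
# placeholders; a local image "my-app:latest" is assumed to exist.
import docker

client = docker.from_env()

registry = "iad.ocir.io"
repo = f"{registry}/axabc123def/project01/acme-web-app"

# Log in with your OCI username (prefixed by the tenancy namespace) and auth token.
client.login(username="axabc123def/jdoe@example.com",
             password="<auth-token>", registry=registry)

# Tag the local source image with the fully qualified target path, then push it.
image = client.images.get("my-app:latest")
image.tag(repo, tag="1.0.2")
client.images.push(repo, tag="1.0.2")

# Pulling works the same way in reverse.
client.images.pull(repo, tag="1.0.2")
```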

    10:36

    Nikita: What happens when you no longer need an old image or you simply want to clean up the list of image tags in a repository?

    Mahendra: You can delete images from the Oracle Cloud Infrastructure Registry. You can undelete an image you've previously deleted for up to 48 hours after you deleted it. After that time, the image is permanently removed from the container registry. You can set up image retention policies to automatically delete images that meet particular selection criteria.

    11:02

    Lois: What sort of selection criteria?

    Mahendra: Criteria can be images that have not been pulled for a certain number of days, or images that have not been tagged for a certain number of days. They can also be images that have not been given particular Docker tags specified as exempt from the automatic deletion.

    There's an hourly process that checks images against the selection criteria, and any that meet the selection criteria are automatically deleted. In each region in a tenancy, there's a global image retention policy. The default criteria of the policy is to retain all images so that no images are automatically deleted. However, you can change the global image retention policy so that the images are deleted if they meet certain criteria that you specify.

    A region's global image retention policy applies to all the repositories within that region unless it is explicitly overridden by one or more custom image retention policies. Only one custom image retention policy at a time can be applied to a repository. If a repository has already been added to a custom retention policy and you want to add the repository to a different custom retention policy, you have to remove the repository from the first retention policy before adding it to the second one.

    12:15

    Lois: Mahendra, what should we keep in mind when we’re dealing with the global image retention policy?

    Mahendra: Global image retention policies are specific to a particular region. To delete images consistently in different regions in your tenancy, you need to set up image retention policies in each region with identical selection criteria.

    If you want to prevent images from being deleted on the basis of Docker tags they've been given, you need to specify those tags as exempt in a comma-separated list. When you want to clean up the list of images in a repository without actually deleting the images, you can remove the tags from the images in OCIR. Removing tags from images is referred to as untagging.

    12:53

    Nikita: OK…and the last approach was managing the overall security of your repository alongside the images, right?

    Mahendra: While managing security, you are given fine grained control over the operations that users are allowed to perform on repositories within the Container Registry. Using the concept of users and groups, you can control repository access by setting up identity access management policies at the tenancy and at the compartment level.

    You can write policies to allow inspect, read, use, and manage operations on the repository based on the requirements. You can set up Oracle Cloud Infrastructure Registry to scan images in a repository for security vulnerabilities published in the publicly available common vulnerabilities and exposure databases. To perform image scanning, container registry makes use of the Oracle Cloud Infrastructure vulnerability-scanning service and vulnerability scanning REST API.

    13:46

    Nikita: What do I need to have in place before I can push and pull Docker images to and from Oracle Cloud Infrastructure Registry?

    Mahendra: The first thing is, your tenancy must be subscribed to one or more of the regions in which the container registry is available. You can check the same within the Oracle documentation.

    The next thing is, you need to have access to the Docker command line interface to push and pull images on your local machine. The third thing is, users must belong to a group to which a policy grants the appropriate permission, or belong to the tenancy's Administrators group, which by default has access permissions on the container registry. Lastly, users must already have an Oracle Cloud Infrastructure username and an authentication token, which enables them to perform operations on the container registry.

    14:29

    Lois: Thank you, Mahendra, for sharing your insights on OCIR with us. To watch demos on managing OCIR, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course.

    Nikita: Mahendra will be back next week to walk us through the basics of Kubernetes. Until then, this is Nikita Abraham…
    Lois: And Lois Houston, signing off!

    14:53

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Welcome to a new season of the Oracle University Podcast, where we delve deep into the world of OCI Container Engine for Kubernetes. Join hosts Lois Houston and Nikita Abraham as they ask senior OCI instructor Mahendra Mehra about the transformative power of containers in application deployment and why they're so crucial in today's software ecosystem. Uncover key differences between virtualization and containerization, and gain insights into Docker components and commands. Getting Started with Oracle Cloud Infrastructure: https://oracleuniversitypodcast.libsyn.com/getting-started-with-oracle-cloud-infrastructure-1 Networking in OCI: https://oracleuniversitypodcast.libsyn.com/networking-in-oci OCI Identity and Access Management: https://oracleuniversitypodcast.libsyn.com/oci-identity-and-access-management OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript:

    00:00

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:26

    Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone! Welcome to a new season of the Oracle University Podcast. This time around, we’re going to delve into the world of OCI Container Engine for Kubernetes, or OKE. For the next couple of weeks, we’ll cover key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure.

    00:58

    Lois: So, whether you’re a cloud native developer, Kubernetes administrator and developer, a DevOps engineer, or site reliability engineer who wants to enhance your expertise in leveraging the OCI OKE service for cloud native application solutions, you’ll want to tune in to these episodes for sure. And if that doesn’t sound like you, I’ll bet you will find the season interesting even if you’re just looking for a deep dive into this service.

    Nikita: That’s right, Lois. In today’s episode, we’ll focus on concepts of containerization, laying the foundation for your journey into the world of containers. And taking us through all this is Mahendra Mehra, a senior OCI instructor with Oracle University.

    01:38

    Lois: Hi Mahendra! We’re so glad to start our look at containerization with you today. Could you give us an overview? Why is it important in today's software world?

    Mahendra: Containerization is a form of virtualization that operates by running applications in isolated user spaces known as containers.

    All these containers share the same underlying operating system. The container engine, pivotal in containerization technologies and container orchestration platforms, serves as the container runtime environment. It effectively manages the creation, deployment, and execution of containers.

    02:18

    Lois: Can you simplify this for a novice like me, maybe by giving us an analogy?

    Mahendra: Imagine a container as a fully packaged and portable computing environment. It's like a digital suitcase that holds everything an application needs to run—binaries, libraries, configuration files, dependencies, you name it. And the best part, it's all encapsulated and isolated within the container.

    02:46

    Nikita: Mahendra, how is containerization making our lives easier today?

    Mahendra: In olden days, running an application meant matching it with your machine's operating system. For example, Windows software required a Windows machine. However, containerization has rewritten this narrative. Now, it's ancient history.

    With containerization, you create a single software package, a container that gracefully runs on any device or operating system. What's fascinating is that these containers seamlessly run while sharing the host operating system.

    The container engine is like a shadow abstracted from the host operating system with limited access to underlying resources. Think of it as a super lightweight virtual machine. The beauty of this is that the containerized application becomes a globetrotter, seamlessly running on bare metal, within VMs, or on cloud platforms without needing tweaks for each environment.

    03:52

    Nikita: How is containerization different from traditional virtualization?

    Mahendra: On one side, we have traditional virtualization. It’s like having multiple houses on a single piece of land, and each house or virtual machine has its complete setup—walls, roofs, and utilities. This setup, while providing isolation, can be resource-intensive, with each virtual machine carrying its entire operating system.

    Now, let's shift gears to containerization, the modern day superhero. Imagine a high-rise building where each floor represents a container. These containers share the same building or host operating system, but have their private space or isolated user space. Here's the magic. They are super lightweight, don't carry extra baggage of a full operating system and can swiftly move between different floors.

    04:50

    Lois: Ok, gotcha. That sounds pretty efficient! So, what are the direct benefits of containerization?

    Mahendra: With containerization technology, there's less overhead during startup and no need to set up a separate guest OS for each application since they all share the same OS kernel. Because of this high efficiency, containerization is commonly used for packing up the many individual microservices that make up modern applications.

    Containerization unfolds a spectrum of benefits, delivering unparalleled portability as containers run uniformly across diverse platforms. This agility, fostered by open source container engines, empowers developers with cross-platform flexibility. The speed of containerized applications known for their lightweight nature reduces cost, boosts efficiency, and accelerates start times.

    Fault isolation ensures robustness, allowing independent operations without affecting others. Efficiency thrives as containers share the OS kernel and reusable layers, optimizing server utilization. The ease of management is achieved through orchestration platforms like Kubernetes automating essential tasks.

    Security remains paramount as container isolation and defined permissions fortify the infrastructure against malicious threats. Containerization emerges not just as a technology but as a transformative force, redefining how we build, deploy, and manage applications in the digital landscape.

    06:37

    Lois: It sure makes deployment efficient, scalability, and seamless! Mahendra, various components of Docker architecture work together to achieve containerization goals, right? Can you walk us through them?

    Mahendra: A developer or a DevOps professional communicates with Docker engine through the Docker client, which may be run on the same computer as Docker engine in case of development environments or through a remote shell.

    So whenever a developer fires a Docker command, the client sends it to the Docker Daemon, which carries it out. The communication between the Docker client and the Docker host usually takes place through REST APIs. The Docker client can communicate with more than one Daemon at a time.

    Docker Daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. The Docker Daemon constantly listens to the Docker API request from the Docker clients and processes them.

    Docker registries are services that provide locations from where you can store and download Docker images. In other words, a Docker registry contains repositories that host one or more Docker images. Public registries include Docker Hub and Docker Cloud and private registries can also be used. Oracle Cloud Infrastructure offers you services like OCIR, which is also called a container registry, where you can host your own private or public registry.
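
    A tiny sketch of that client-to-daemon interaction, using the Docker SDK for Python as the client, might look like this; it assumes a local Docker daemon is running.

```python
# A minimal sketch of the client/daemon relationship: the Docker SDK for
# Python acts as the Docker client and talks to the local daemon over its API.
import docker

client = docker.from_env()           # connects to the local Docker daemon
print(client.ping())                 # True if the daemon answers
print(client.version()["Version"])   # daemon version reported over the API
for image in client.images.list():   # images currently managed by the daemon
    print(image.tags)
```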

    08:02

    Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free. So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai.

    08:39

    Nikita: Welcome back! Mahendra, I’m wondering how virtual machines are different from containers. How do virtual machines work?

    Mahendra: A hypervisor or a virtual machine monitor is a software, firmware, or hardware that creates and runs virtual machines. It is placed between the hardware and the virtual machines, and is necessary to virtualize the server.

    Within each virtual machine runs a unique guest operating system. VMs with different operating systems can run on the same physical server. A Linux VM can sit alongside a Windows VM and so on. Each VM has its own binaries, libraries, and application that it services. And the VM may be many gigabytes in size.

    09:22

    Lois: What kind of benefits do we see from virtual machines?

    Mahendra: This technique provides a variety of benefits like the ability to consolidate applications into a single system, cost savings through reduced footprints, and faster server provisioning.

    But this approach has its own drawbacks. Each VM includes a separate operating system image, which adds overhead in memory and storage footprint. As it turns out, this issue adds complexity to all the stages of software development lifecycle, from development and test to production and disaster recovery as well.

    It also severely limits the portability of applications between different cloud providers and traditional data centers. And this is where containers come to the rescue.

    10:05

    Lois: OK…how do containers help in this situation?

    Mahendra: Containers sit on top of a physical server and its host operating system—typically, Linux or Windows. Each container shares the host OS kernel and usually the binaries and libraries as well. But the shared components are read only.

    Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code. A server can run multiple workloads with a single operating system installation. Containers are thus exceptionally lightweight. They are only megabytes in size and take just seconds to start.

    What this means in practice is you can put two or three times as many applications on a single server with containers as you can with virtual machines. Compared to containers, virtual machines take minutes to start and are an order of magnitude larger than an equivalent container, measured in gigabytes versus megabytes.

    11:01

    Nikita: So then, is there ever a time you should use a virtual machine?

    Mahendra: You should use a virtual machine when you want to run applications that specifically require a new OS, also when isolation and security are your priority over everything else. In most scenarios, a container will provide a lighter, faster, and more cost-effective solution than the virtual machines.

    11:22

    Lois: Now that we’ve discussed containerization and the different Docker components, can you tell us more about working with Docker images? We first need to know what a Dockerfile is, right?

    Mahendra: A Dockerfile is a text file that defines a Docker image. You’ll use a Dockerfile to create your own custom Docker image. In other words, you use it to define your custom environment to be used in a Docker container.

    You'll want to create your own Dockerfile when existing images won't meet your project needs due to different runtime requirements, which means that learning about Dockerfiles is an essential part of working with Docker. A Dockerfile is a step-by-step definition for building up a Docker image. It provides a set of standard instructions that Docker will execute when you issue a docker build command.

    12:09

    Nikita: Before we wrap up, can you walk us through some Docker commands?

    Mahendra: Every Dockerfile must start with a FROM instruction. The idea behind this is that you need a starting point to build your image. It can be from scratch or from an existing image available in the Docker registry.

    The RUN command is used to execute a command and will wait till the command finishes its execution. Since most of the images are Linux-based, a good practice is to set up a directory you will work in. That's the purpose of the WORKDIR line. It defines a working directory and moves you into it. The COPY instruction helps you to copy your source code into the image. ENV provides default values for variables that can be accessed within the containers. If your app needs to be reached from outside the container, you must open its listening port using the EXPOSE command.

    Once your application is ready to run, the last thing to do is to specify how to execute it. You add the CMD line with the same command and all the arguments you used locally to launch your application. This command can also be used to execute commands at runtime for the containers, but we can be more flexible using the ENTRYPOINT command. Labels are used in a Dockerfile to help organize your Docker images.
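
    To tie these instructions together, here is a minimal sketch of what such a Dockerfile might look like for a small Node.js web app. The base image, port, and file names are illustrative assumptions, not something taken from the episode or the course.

        # Every Dockerfile starts with FROM: the starting point for the image.
        FROM node:18-alpine

        # Labels help organize your Docker images.
        LABEL maintainer="you@example.com"

        # Define the working directory and move into it.
        WORKDIR /app

        # Copy the source code into the image and install dependencies.
        COPY package.json ./
        RUN npm install
        COPY . .

        # Default values for variables accessible inside the container.
        ENV NODE_ENV=production

        # Open the port the application listens on.
        EXPOSE 3000

        # How to launch the application when the container starts.
        CMD ["node", "server.js"]

    You would then build the image with docker build -t my-app . and run it with docker run -p 3000:3000 my-app.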

    13:20

    Lois: Thank you, Mahendra, for joining us today. I learned a lot! And if you want to learn more about working with Docker images, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. The course is free so you can get started right away.

    Nikita: Yeah, a fundamental understanding of core OCI services, like Identity and Access Management, networking, compute, storage, and security, is a prerequisite to the course and will certainly serve you well when leveraging the OCI OKE service. And the quickest way to gain this knowledge is by completing the OCI Foundations Associate learning path on MyLearn and getting certified. You can also listen to episodes from our first season, called OCI Made Easy, where we discussed these topics. We’ll put a few links in the show notes so you can easily find them.

    Lois: We’re looking forward to having Mahendra join us again next week when we’ll talk about container registries. Until next time, this is Lois Houston…

    Nikita: And Nikita Abraham signing off!

    14:24

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Listen to Lois Houston and Nikita Abraham, along with Senior Principal Product Manager Wes Prichard, as they explore the five core components of OCI AI services: language, speech, vision, document understanding, and anomaly detection, to help you make better sense of all that unstructured data around you. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    The world of artificial intelligence is vast and everchanging. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

    00:33

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:46

    Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! In our last episode, we spoke about OCI AI Portfolio, including AI and ML services, and the OCI AI infrastructure.

    Nikita: Yeah, and in today’s episode, we’re going to continue down a similar path and take a closer look at OCI AI services.

    01:16

    Lois: With us today is Senior Principal Product Manager, Wes Prichard. Hi Wes! It’s lovely to have you here with us. Hemant gave us a broad overview of the various OCI AI services last week, but we’re really hoping to get into each of them with you. So, let’s jump right in and start with the OCI Language service. What can you tell us about it?

    Wes: OCI Language analyzes unstructured text for you. It provides models trained on industry data to perform language analysis with no data science experience needed.

    01:48

    Nikita: What kind of big things can it do?

    Wes: It has five main capabilities. First, it detects the language of the text. It recognizes 75 languages, from Afrikaans to Welsh.
    It identifies entities, things like names, places, dates, emails, currency, organizations, phone numbers--14 types in all. It identifies the sentiment of the text, and not just one sentiment for the entire block of text, but the different sentiments for different aspects.

    02:17

    Nikita: What do you mean by that, Wes?

    Wes: So let's say you read a restaurant review that said, the food was great, but the service sucked. You'll get food with a positive sentiment and service with a negative sentiment. And it also analyzes the sentiment for every sentence.

    Lois: Ah, that’s smart. Ok, so we covered three capabilities. What else?

    Wes: It identifies key phrases in the text that represent the important ideas or subjects. And it classifies the general topic of the text from a list of 600 categories and subcategories.
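
    For readers who want to try this from code, here is a minimal sketch of calling OCI Language through the OCI Python SDK. It assumes an OCI config file is already set up, and the client, method, and model class names shown follow the SDK's usual naming pattern but should be verified against the current SDK reference before use.

        import oci

        # Load credentials from the default OCI config file (~/.oci/config).
        config = oci.config.from_file()

        # Client and model names are assumptions based on the OCI Python SDK
        # naming conventions; check the AI Language section of the SDK reference.
        language_client = oci.ai_language.AIServiceLanguageClient(config)

        documents = [
            oci.ai_language.models.TextDocument(
                key="review-1",
                text="The food was great, but the service was slow.",
                language_code="en",
            )
        ]

        # Aspect-based sentiment analysis: one call, per-aspect sentiment in the response.
        details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(documents=documents)
        response = language_client.batch_detect_language_sentiments(details)

        for doc in response.data.documents:
            for aspect in doc.aspects:
                print(aspect.text, aspect.sentiment)

    The other capabilities (language detection, entities, key phrases, and text classification) follow the same batch-call pattern.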

    02:48

    Lois: Ok, and then there’s the OCI Speech service...

    Wes: OCI Speech is very straightforward. It unlocks the data in audio tracks by converting speech to text. Developers can use Oracle's time-tested acoustic language models to provide highly accurate transcription for audio or video files across multiple languages.

    OCI Speech automatically transcribes audio and video files into text using advanced deep learning techniques. There's no data science experience required. It processes data directly in object storage. And it generates timestamped, grammatically accurate transcriptions.

    03:22

    Nikita: What are some of the main features of OCI Speech?

    Wes: OCI Speech supports multiple languages, specifically English, Spanish, and Portuguese, with more coming in the future. It has batching support where multiple files can be submitted with a single call. It has blazing fast processing. It can transcribe hours of audio in less than 10 minutes. It does this by chunking up your audio into smaller segments, and transcribing each segment, and then joining them all back together into a single file. It provides a confidence score, both per word and per transcription. It punctuates transcriptions to make the text more readable and to allow downstream systems to process the text with less friction.

    And it has SRT file support.

    04:06

    Lois: SRT? What’s that?

    Wes: SRT is the most popular closed caption output file format. And with this SRT support, users can add closed captions to their video. OCI Speech makes transcribed text more readable to resemble how humans write. This is called normalization. And the service will normalize things like addresses, times, numbers, URLs, and more.

    It also does profanity filtering, where it can either remove, mask, or tag profanity in the output text. Removing replaces the word with asterisks, masking does the same thing but retains the first letter, and tagging leaves the word in place but provides tagging in the output data.

    04:49

    Nikita: And what about OCI Vision? What are its capabilities?

    Wes: Vision is a computer vision service that works on images, and it provides two main capabilities-- image analysis and document AI. Image analysis analyzes photographic images. Object detection is the feature that detects objects inside an image using a bounding box and assigning a label to each object with an accuracy percentage. Object detection also locates and extracts text that appears in the scene, like on a sign.

    Image classification will assign classification labels to the image by identifying the major features in the scene. One of the most powerful capabilities of image analysis is that, in addition to pretrained models, users can retrain the models with their own unique data to fit their specific needs.

    05:40

    Lois: So object detection and image classification are features of image analysis. I think I got it! So then what’s document AI?

    Wes: It's used for working with document images. You can use it to understand PDFs or document image types, like JPEG, PNG, and TIFF, or photographs containing textual information.

    06:01

    Lois: And what are its most important features?

    Wes: The features of document AI are text recognition, also known as OCR or optical character recognition.

    And this extracts text from images, including non-trivial scenarios, like handwritten texts, plus tilted, shaded, or rotated documents. Document classification classifies documents into 10 different types based on visual appearance, high-level features, and extracted keywords. This is useful when you need to process a document, based on its classification, like an invoice, a receipt, or a resume.

    Language detection analyzes the visual features of text to determine the language rather than relying on the text itself. Table extraction identifies tables in docs and extracts their content in tabular form. Key value extraction finds values for 13 common fields and line items in receipts, things like merchant name and transaction date.

    07:02

    Want to get the inside scoop on Oracle University? Head over to the Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get first-hand access to new products. Read the OU Learning Blog. Participate in Challenges. And stay up-to-date with upcoming certification opportunities.

    Visit mylearn.oracle.com to get started.

    07:27

    Nikita: Welcome back! Wes, I want to ask you about OCI Anomaly Detection. We discussed it a bit last week and it seems like such an intelligent and efficient service.

    Wes: Oracle Cloud Infrastructure Anomaly Detection identifies anomalies in time series data. Equipment sensors generate time series data, but all kinds of business metrics are also time-based. The unique feature of this service is that it finds anomalies, not just in a single signal, but across many signals at once. That's important because machines often generate multiple signals at once and the signals are often related.

    08:03

    Nikita: Ok you need to give us an example of this!

    Wes: Think of a pump that has an output pressure, a flow rate, an RPM, and an electrical current draw. When a pump's going to fail, anomalies may appear across several of those signals but at different times. OCI Anomaly Detection helps you to identify anomalies in a multivariate data set by taking advantage of the interrelationship among signals.

    The service contains algorithms for both multi-signal (multivariate) and single-signal (univariate) anomaly detection, and it automatically determines which algorithm to use based on the training data provided. The multivariate algorithm is called MSET-2, which stands for Multivariate State Estimation Technique, and it's unique to Oracle.

    08:49

    Lois: And the 2?

    Wes: The 2 in the name refers to the patented enhancements by Oracle Labs that automatically identify and fix data quality issues, resulting in fewer false alarms and more accurate results.

    Now unlike some of the other AI services, OCI Anomaly Detection is always trained on the customer's data. It's trained using actual historical data with no anomalies, and there can be as many different trained models as needed for different sets of signals.

    09:18

    Nikita: So where would one use a service like this?

    Wes: One of the most obvious applications of this service is for predictive maintenance. Early warning of a problem provides the opportunity to deploy maintenance resources and schedule downtime to minimize disruption to the business.

    09:33

    Lois: How would you train an OCI Anomaly Detection model?
    Wes: It's a simple four-step process to prepare a model that can be used for anomaly detection. The first step is to obtain training data from the system to be monitored. The data must contain no anomalies and should cover the normal range of values that would be experienced in a full business cycle.
    Second, the training data file is uploaded to an object storage bucket.

    Third, a data set is created for the training data. So a data set in this context is an object in the OCI Anomaly Detection service to manage data used for training and testing models.

    And fourth, the model is trained. A wizard in the user interface steps the user through the required inputs, such as the training data set and some training parameters like the target false alarm probability.
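
    As a small illustration of step two, here is a sketch of uploading the anomaly-free training file to an Object Storage bucket with the OCI Python SDK. The bucket and file names are hypothetical, and steps three and four (creating the data set and training the model) would then be done through the Anomaly Detection service, for example with the console wizard Wes describes.

        import oci

        # Load credentials from the default OCI config file (~/.oci/config).
        config = oci.config.from_file()
        object_storage = oci.object_storage.ObjectStorageClient(config)

        namespace = object_storage.get_namespace().data
        bucket_name = "anomaly-training"          # hypothetical bucket
        object_name = "pump_signals_train.csv"    # hypothetical anomaly-free training file

        # Upload the training data so the Anomaly Detection data set can reference it.
        with open(object_name, "rb") as f:
            object_storage.put_object(namespace, bucket_name, object_name, f)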

    10:23

    Lois: How would this service know about the data and whether the trained model is univariate or multivariate?

    Wes: When training OCI Anomaly Detection models, the user does not need to specify whether the intended model is for multivariate or univariate data. It does this detection automatically.

    For example, if a model is trained with 10 signals and 5 of those signals are determined to be correlated enough for multivariate anomaly detection, it will create an internal multivariate model for those signals. If the other five signals are not correlated with each other, it will create an internal univariate model for each one.

    From the user's perspective, the result will be a single OCI anomaly detection model for the 10 signals. But internally, the signals are treated differently based on the training. A user can also train a model on a single signal and it will result in a univariate model.

    11:16

    Lois: What does this OCI Anomaly Detection model training entail? How does it ensure that it does not have any false alarms?

    Wes: Training a model requires a single data file with no anomalies that should cover a complete business cycle, which means it should represent all the normal variations in the signal. During training, OCI Anomaly Detection will use a portion of the data for training and another portion for automated testing. The fraction used for each is specified when the model is trained.
    When model training is complete, it's best practice to do another test of the model with a data set containing anomalies to see if the anomalies are detected and if there are any false alarms. Based on the outcome, the user may want to retrain the model and specify a different false alarm probability, also called F-A-P or FAP. The FAP is the probability that the model would produce a false alarm. The false alarm probability can be thought of as the sensitivity of the model. The lower the false alarm probability, the less likelihood of it reporting a false alarm, but the less sensitive it will be to detecting anomalies. Selecting the right FAP is a business decision based on the need for sensitive detections balanced by the ability to tolerate false alarms.

    Once a model has been trained and the user is satisfied with its detection performance, it can then be used for inferencing.

    12:44

    Nikita: Inferencing? Is that what I think it is?
    Wes: New data is submitted to the model and OCI Anomaly Detection will respond with anomalies that are detected. The input data must contain the same signals that the model was trained on. So, for example, if the model was trained on signals A, B, C, and D, then for detection inferencing, the same four signals must be provided. No more, no less.

    13:07

    Lois: Where can I find the features of OCI Anomaly Detection that you mentioned?

    Wes: The training and inferencing features of OCI Anomaly Detection can be accessed through the OCI console. However, a human-driven interface is not efficient for most business scenarios.

    In most cases, automating the detection of anomalies through software is preferred to be able to process hundreds or thousands of signals using many trained models. The service provides multiple software interfaces for this purpose.
    Each trained model is accessible through a REST API and an HTTP endpoint. Additionally, programming language-specific SDKs are available for multiple languages, including Python. Using the Python SDK, data scientists can work with OCI Anomaly Detection for both training and inferencing in an OCI Data Science notebook.

    13:58

    Nikita: How can a data scientist take advantage of these capabilities?

    Wes: Well, you can write code against the REST API or use any of the various language SDKs. But for data scientists working in OCI Data Science, it makes sense to use Python.

    14:12

    Lois: That’s exciting! What does it take to use the Python SDK in a notebook… to be able to use the AI services?

    Wes: You can use a Notebook session in OCI Data Science to invoke the SDK for any of the AI services.

    This might be useful to generate new features for a custom model or simply as a way to consume the service using a familiar Python interface. But before you can invoke the SDK, you have to prepare the data science notebook session by supplying it with an API Signing Key.

    The Signing Key is unique to a particular user and tenancy and authenticates that user to OCI when invoking the SDK. Therefore, you want to make sure you safeguard your Signing Key and never share it with another user.

    14:55

    Nikita: And where would I get my API Signing Key?

    Wes: You can obtain an API Signing Key from your user profile in the OCI Console. Then you save that key as a file to your local machine.

    When you add the API Signing Key, the console also provides the configuration entries to be added to a config file that the SDK expects to find in the environment where the SDK code is executing. The config file then references the key file. Once these files are prepared on your local machine, you can upload them to the Notebook session, where you will execute SDK code for the AI service.

    The API Signing Key and config file can be reused with any of your notebook sessions, and the same files also work for all of the AI services. So, the files only need to be created once for each user and tenancy combination.
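
    As a concrete illustration, here is roughly what that config file and key setup look like. The OCIDs, fingerprint, region, and paths below are placeholders, not real values; the console-generated snippet for your own key is what you should actually use.

        # Contents of ~/.oci/config (placeholder values):
        #
        #   [DEFAULT]
        #   user=ocid1.user.oc1..<placeholder>
        #   fingerprint=<key-fingerprint>
        #   tenancy=ocid1.tenancy.oc1..<placeholder>
        #   region=us-ashburn-1
        #   key_file=~/.oci/oci_api_key.pem

        import oci

        # In the notebook session, load the uploaded config and check that it is usable.
        config = oci.config.from_file("~/.oci/config", "DEFAULT")
        oci.config.validate_config(config)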

    15:48

    Lois: Thank you so much, Wes, for this really insightful discussion. To learn more about the topics covered today, you can visit mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course.

    Nikita: And remember, that course prepares you for the Oracle Cloud Infrastructure AI Foundations Associate certification that you can take for free! So, don’t wait too long to check it out. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham…

    Lois Houston: And Lois Houston, signing off!

    16:23

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Oracle has been actively focusing on bringing AI to the enterprise at every layer of its tech stack, be it SaaS apps, AI services, infrastructure, or data. In this episode, hosts Lois Houston and Nikita Abraham, along with senior instructors Hemant Gahankari and Himanshu Raj, discuss OCI AI and Machine Learning services. They also go over some key OCI Data Science concepts and responsible AI principles. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    The world of artificial intelligence is vast and everchanging. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

    00:33

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:46

    Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hey everyone! In our last episode, we dove into Generative AI and Large Language Models.

    Lois: Yeah, that was an interesting one. But today, we’re going to discuss the AI and machine learning services offered by Oracle Cloud Infrastructure, and we’ll look at the OCI AI infrastructure.

    Nikita: I’m also going to try and squeeze in a couple of questions on a topic I’m really keen about, which is responsible AI. To take us through all of this, we have two of our colleagues, Hemant Gahankari and Himanshu Raj. Hemant is a Senior Principal OCI Instructor and Himanshu is a Senior Instructor on AI/ML. So, let’s get started!

    01:36

    Lois: Hi Hemant! We’re so excited to have you here! We know that Oracle has really been focusing on bringing AI to the enterprise at every layer of our stack.

    Hemant: It all begins with data and infrastructure layers. OCI AI services consume data, and AI services, in turn, are consumed by applications.

    This approach involves extensive investment from infrastructure to SaaS applications. Generative AI and massive scale models are the more recent steps. Oracle AI is the portfolio of cloud services for helping organizations use the data they may have for business-specific uses.

    Business applications consume AI and ML services. The foundation of AI services and ML services is data. AI services contain pre-built models for specific uses. Some of the AI services are pre-trained, and some can be additionally trained by the customer with their own data.

    AI services can be consumed by calling the API for the service, passing in the data to be processed, and the service returns a result. There is no infrastructure to be managed for using AI services.

    02:58

    Nikita: How do I access OCI AI services?

    Hemant: OCI AI services provide multiple methods for access. The most common method is the OCI Console. The OCI Console provides an easy-to-use, browser-based interface that enables access to notebook sessions and all the features of the Data Science service, as well as the AI services.

    The REST API provides access to service functionality but requires programming expertise, and an API reference is provided in the product documentation. OCI also provides programming language SDKs for Java, Python, TypeScript, JavaScript, .NET, Go, and Ruby. The command line interface provides both quick access and full functionality without the need for scripting.

    03:52

    Lois: Hemant, what are the types of OCI AI services that are available?

    Hemant: OCI AI services is a collection of services with pre-built machine learning models that make it easier for developers to build a variety of business applications. The models can also be custom trained for more accurate business results. The different services provided are Digital Assistant, Language, Vision, Speech, Document Understanding, and Anomaly Detection.

    04:24

    Lois: I know we’re going to talk about them in more detail in the next episode, but can you introduce us to OCI Language, Vision, and Speech?

    Hemant: OCI Language allows you to perform sophisticated text analysis at scale. Using the pre-trained and custom models, you can process unstructured text to extract insights without data science expertise. Pre-trained models include language detection, sentiment analysis, key phrase extraction, text classification, named entity recognition, and personally identifiable information detection.

    Custom models can be trained for named entity recognition and text classification with domain-specific data sets. In text translation, neural machine translation is used to translate text across numerous languages.

    Using OCI Vision, you can upload images to detect and classify objects in them. Pre-trained models and custom models are supported. In image analysis, pre-trained models perform object detection, image classification, and optical character recognition. In image analysis, custom models can perform custom object detection by detecting the location of custom objects in an image and providing a bounding box.
    The OCI Speech service is used to convert media files to readable text that's stored in JSON and SRT formats. Speech enables you to easily convert media files containing human speech into highly accurate text transcriptions.

    06:12

    Nikita: That’s great. And what about document understanding and anomaly detection?

    Hemant: Using OCI document understanding, you can upload documents to detect and classify text and objects in them. You can process individual files or batches of documents. In OCR, document understanding can detect and recognize text in a document. In text extraction, document understanding provides the word-level and line-level text, and the bounding box coordinates of where the text is found.

    In key value extraction, document understanding extracts a predefined list of key value pairs of information from receipts, invoices, passports, and driver IDs. In table extraction, document understanding extracts content in tabular format, maintaining the row and column relationship of cells. In document classification, the document understanding classifies documents into different types.

    The OCI Anomaly Detection service analyzes large volumes of multivariate or univariate time series data. The Anomaly Detection service increases the reliability of businesses by monitoring their critical assets and detecting anomalies early with high precision. Anomaly detection is the identification of rare items, events, or observations in data that differ significantly from the expectation.

    07:55

    Nikita: Where is Anomaly Detection most useful?

    Hemant: The Anomaly Detection service is designed to help with analyzing large amounts of data and identifying the anomalies at the earliest possible time with maximum accuracy. Different sectors, such as utility, oil and gas, transportation, manufacturing, telecommunications, banking, and insurance use Anomaly Detection service for their day-to-day activities.

    08:23

    Lois: Ok…and the first OCI AI service you mentioned was digital assistant…

    Hemant: Oracle Digital Assistant is a platform that allows you to create and deploy digital assistants, which are AI driven interfaces that help users accomplish a variety of tasks with natural language conversations. When a user engages with the Digital Assistant, the Digital Assistant evaluates the user input and routes the conversation to and from the appropriate skills.
    Digital Assistant greets the user upon access. Upon user request, it lists what it can do and provides entry points into the given skills. It routes explicit user requests to the appropriate skills. It also handles interruptions to flows and disambiguation, as well as requests to exit the bot.

    09:21

    Nikita: Excellent! Let’s bring Himanshu in to tell us about machine learning services. Hi Himanshu! Let’s talk about OCI Data Science. Can you tell us a bit about it?

    Himanshu: OCI Data Science is the cloud service focused on serving the data scientist throughout the full machine learning life cycle with support for Python and open source.

    The service has many features, such as model catalog, projects, JupyterLab notebook, model deployment, model training, management, model explanation, open source libraries, and AutoML.

    09:56

    Lois: Himanshu, what are the core principles of OCI Data Science?

    Himanshu: There are three core principles of OCI Data Science. The first one, accelerated. The first principle is about accelerating the work of the individual data scientist. OCI Data Science provides data scientists with open source libraries along with easy access to a range of compute power without having to manage any infrastructure. It also includes Oracle's own library to help streamline many aspects of their work.
    The second principle is collaborative. It goes beyond an individual data scientist's productivity to enable data science teams to work together. This is done through the sharing of assets, reducing duplicative work, and providing reproducibility and auditability of models for collaboration and risk management.

    Third is enterprise grade. That means it's integrated with all the OCI security and access protocols. The underlying infrastructure is fully managed. The customer does not have to think about provisioning compute and storage. And the service handles all the maintenance, patching, and upgrades, so users can focus on solving business problems with data science.

    11:11

    Nikita: Let’s drill down into the specifics of OCI Data Science. So far, we know it’s cloud service to rapidly build, train, deploy, and manage machine learning models. But who can use it? Where is it? And how is it used?

    Himanshu: It serves data scientists and data science teams throughout the full machine learning life cycle.

    Users work in a familiar JupyterLab notebook interface, where they write Python code. And how is it used? Users preserve their models in the model catalog and deploy their models to a managed infrastructure.

    11:46

    Lois: Walk us through some of the key terminology that’s used.

    Himanshu: Some of the important product terminology of OCI Data Science are projects. The projects are containers that enable data science teams to organize their work. They represent collaborative work spaces for organizing and documenting data science assets, such as notebook sessions and models.

    Note that a tenancy can have as many projects as needed without limits. Now, the notebook session is where the data scientists work. Notebook sessions provide a JupyterLab environment with pre-installed open source libraries and the ability to add others. Notebook sessions are interactive coding environments for building and training models.

    Notebook sessions run in a managed infrastructure and the user can select CPU or GPU, the compute shape, and amount of storage without having to do any manual provisioning. The other important feature is Conda environment. It's an open source environment and package management system and was created for Python programs.

    12:53

    Nikita: What is a Conda environment used for?

    Himanshu: It is used in the service to quickly install, run, and update packages and their dependencies. Conda easily creates, saves, loads, and switches between environments in your notebook sessions.

    13:07

    Nikita: Earlier, you spoke about the support for Python in OCI Data Science. Is there a dedicated library?

    Himanshu: Oracle's Accelerated Data Science (ADS) SDK is a Python library that is included as part of OCI Data Science.
    ADS has many functions and objects that automate or simplify the steps in the data science workflow, including connecting to data, exploring and visualizing data, training a model with AutoML, evaluating models, and explaining models. In addition, ADS provides a simple interface to access the Data Science service model catalog and other OCI services, including object storage.

    13:45

    Lois: I also hear a lot about models. What are models?

    Himanshu: Models define a mathematical representation of your data and business process. You create models in notebook sessions, inside projects.

    13:57

    Lois: What are some other important terminologies related to models?

    Himanshu: The next terminology is model catalog. The model catalog is a place to store, track, share, and manage models.
    The model catalog is a centralized and managed repository of model artifacts. A stored model includes metadata about the provenance of the model, including Git-related information and the script or notebook used to push the model to the catalog. Models stored in the model catalog can be shared across members of a team, and they can be loaded back into a notebook session.

    The next one is model deployments. Model deployments allow you to deploy models stored in the model catalog as HTTP endpoints on managed infrastructure.

    14:45

    Lois: So, how do you operationalize these models?

    Himanshu: Deploying machine learning models as web applications, HTTP API endpoints serving predictions in real time, is the most common way to operationalize models. HTTP endpoints or API endpoints are flexible and can serve requests for model predictions. Data science jobs enable you to define and run repeatable machine learning tasks on fully managed infrastructure.
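
    To make the HTTP endpoint idea concrete, here is a hedged sketch of invoking a model deployment from Python. The endpoint URI below is a hypothetical placeholder (you would copy the real one from the model deployment's details page), and the request is signed with OCI API key authentication.

        import oci
        import requests

        config = oci.config.from_file()

        # Signer that adds OCI request signing to the HTTP call.
        signer = oci.signer.Signer(
            tenancy=config["tenancy"],
            user=config["user"],
            fingerprint=config["fingerprint"],
            private_key_file_location=config["key_file"],
        )

        # Hypothetical model deployment endpoint.
        endpoint = (
            "https://modeldeployment.us-ashburn-1.oci.customer-oci.com/"
            "ocid1.datasciencemodeldeployment.oc1..<placeholder>/predict"
        )

        payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}  # input shape depends on your model
        response = requests.post(endpoint, json=payload, auth=signer)
        print(response.json())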

    Nikita: Thanks for that, Himanshu.

    15:18

    Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification.

    Visit mylearn.oracle.com to get started.

    15:46

    Nikita: Welcome back! The Oracle AI Stack consists of AI services and machine learning services, and these services are built using AI infrastructure. So, let’s move on to that. Hemant, what are the components of OCI AI Infrastructure?

    Hemant: OCI AI Infrastructure is mainly composed of GPU-based instances. Instances can be virtual machines or bare metal machines. High performance cluster networking allows instances to communicate with each other. Superclusters are a massive network of GPU instances with multiple petabytes per second of bandwidth. And a variety of fully managed storage options, from a single byte to exabytes without upfront provisioning, are also available.

    16:35

    Lois: Can we explore each of these components a little more? First, tell us, why do we need GPUs?

    Hemant: ML and AI need lots of repetitive computations to be made on huge amounts of data. Parallel computing on GPUs is designed for many processes at the same time. A GPU is a piece of hardware that is incredibly good at performing computations.
    A GPU has thousands of lightweight cores, all working on their share of data in parallel. This gives them the ability to crunch through extremely large data sets at tremendous speed.

    17:14

    Nikita: And what are the GPU instances offered by OCI?

    Hemant: GPU instances are ideally suited for model training and inference. Bare metal and virtual machine compute instances powered by NVIDIA H100, A100, A10, and V100 GPUs are made available by OCI.

    17:35

    Nikita: So how do we choose what to train from these different GPU options?

    Hemant: For large scale AI training, data analytics, and high performance computing, bare metal instances BM 8 X NVIDIA H100 and BM 8 X NVIDIA A100 can be used.

    These provide up to nine times faster AI training and 30 times higher acceleration for AI inferencing. The other bare metal and virtual machines are used for small AI training, inference, streaming, gaming, and virtual desktop infrastructure.

    18:14

    Lois: And why would someone choose the OCI AI stack over its counterparts?

    Hemant: Oracle offers all the features and is the most cost effective option when compared to its counterparts.

    For example, BM GPU 4.8 version 2 instance costs just $4 per hour and is used by many customers.

    Superclusters are a massive network with multiple petabytes per second of bandwidth. It can scale up to 4,096 OCI bare metal instances with 32,768 GPUs.

    We also have a choice of bare metal A100 or H100 GPU instances, and we can select a variety of storage options, like object store, or block store, or even file system. For networking speeds, we can reach 1,600 Gb per second with A100 GPUs and 3,200 Gb per second with H100 GPUs.

    With OCI storage, we can select local SSD with up to four NVMe drives, block storage up to 32 terabytes per volume, object storage up to 10 terabytes per object, and file systems up to eight exabytes per file system. OCI File Storage employs five-way replicated storage located in different fault domains to provide redundancy for resilient data protection.

    HPC file systems, such as BeeGFS and many others, are also offered. OCI HPC file systems are available on Oracle Cloud Marketplace and make it easy to deploy a variety of high performance file servers.

    20:11

    Lois: I think a discussion on AI would be incomplete if we don’t talk about responsible AI. We’re using AI more and more every day, but can we actually trust it?

    Hemant: For us to trust AI, it must be driven by ethics that guide us as well.

    Nikita: And do we have some principles that guide the use of AI?

    Hemant: AI should be lawful, complying with all applicable laws and regulations. AI should be ethical, that is it should ensure adherence to ethical principles and values that we uphold as humans. And AI should be robust, both from a technical and social perspective. Because even with the good intentions, AI systems can cause unintentional harm.

    AI systems do not operate in a lawless world. A number of legally binding rules at the national and international level apply or are relevant to the development, deployment, and use of AI systems today. The law not only prohibits certain actions but also enables others, like protecting the rights of minorities or protecting the environment. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications. For instance, the medical device regulation in the health care sector.

    In the AI context, equality entails that the system's operations cannot generate unfairly biased outputs. And while we adopt AI, citizens' rights should also be protected.

    21:50

    Lois: Ok, but how do we derive AI ethics from these?

    Hemant: There are three main principles.
    AI should be used to help humans and allow for oversight. It should never cause physical or social harm. Decisions taken by AI should be transparent and fair, and also should be explainable. AI that follows the AI ethical principles is responsible AI.

    So if we map the AI ethical principles to responsible AI requirements, these will be like, AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight. AI systems and environments in which they operate must be safe and secure, they must be technically robust, and should not be open to malicious use.

    The development, and deployment, and use of AI systems must be fair, ensuring equal and just distribution of both benefits and costs. AI should be free from unfair bias and discrimination. Decisions taken by AI to the extent possible should be explainable to those directly and indirectly affected.

    23:21

    Nikita: This is all great, but what does a typical responsible AI implementation process look like?

    Hemant: First, governance needs to be put in place. Second, develop a set of policies and procedures to be followed. And once implemented, ensure compliance by regular monitoring and evaluation.

    Lois: And this is all managed by developers?

    Hemant: Typical roles that are involved in the implementation cycles are developers, deployers, and end users of the AI.

    23:56

    Nikita: Can we talk about AI specifically in health care? How do we ensure that there is fairness and no bias?

    Hemant: AI systems are only as good as the data that they are trained on. If that data is predominantly from one gender or racial group, the AI systems might not perform as well on data from other groups.

    24:21

    Lois: Yeah, and there’s also the issue of ensuring transparency, right?

    Hemant: AI systems often make decisions based on complex algorithms that are difficult for humans to understand. As a result, patients and health care providers can have difficulty trusting the decisions made by the AI. AI systems must be regularly evaluated to ensure that they are performing as intended and not causing harm to patients.

    24:49

    Nikita: Thank you, Hemant and Himanshu, for this really insightful session. If you’re interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course.

    Lois: That’s right, Niki. You’ll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we’ll get into the OCI AI Services we discussed today and talk about them in more detail. Until then, this is Lois Houston…

    Nikita: And Nikita Abraham, signing off!

    25:25

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • In this week’s episode, Lois Houston and Nikita Abraham, along with Senior Instructor Himanshu Raj, take you through the extraordinary capabilities of Generative AI, a subset of deep learning that doesn’t make predictions but rather creates its own content. They also explore the workings of Large Language Models. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript:

    00:00

    The world of artificial intelligence is vast and everchanging. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

    00:33

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:46

    Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone! In our last episode, we went over the basics of deep learning. Today, we’ll look at generative AI and large language models, and discuss how they work. To help us with that, we have Himanshu Raj, Senior Instructor on AI/ML. So, let’s jump right in. Hi Himanshu, what is generative AI?

    01:21

    Himanshu: Generative AI refers to a type of AI that can create new content. It is a subset of deep learning, where the models are trained not to make predictions but rather to generate output on their own.

    Think of generative AI as an artist who looks at a lot of paintings and learns the patterns and styles present in them. Once it has learned these patterns, it can generate new paintings that resemble what it learned.

    01:48

    Lois: Let's take an example to understand this better. Suppose we want to train a generative AI model to draw a dog. How would we achieve this?

    Himanshu: You would start by giving it a lot of pictures of dogs to learn from. The AI does not know anything about what a dog looks like. But by looking at these pictures, it starts to figure out common patterns and features, like dogs often have pointy ears, narrow faces, whiskers, etc. You can then ask it to draw a new picture of a dog.

    The AI will use the patterns it learned to generate a picture that hopefully looks like a dog. But remember, the AI is not copying any of the pictures it has seen before but creating a new image based on the patterns it has learned. This is the basic idea behind generative AI. In practice, the process involves a lot of complex maths and computation, and there are different techniques and architectures that can be used, such as variational autoencoders (VAEs) and generative adversarial networks (GANs).

    02:48

    Nikita: Himanshu, where is generative AI used in the real world?

    Himanshu: Generative AI models have a wide variety of applications across numerous domains. For the image generation, generative models like GANs are used to generate realistic images. They can be used for tasks, like creating artwork, synthesizing images of human faces, or transforming sketches into photorealistic images.

    For text generation, large language models like GPT-3, which are generative in nature, can create human-like text. This has applications in content creation, like writing articles and generating ideas, and in conversational AI, like chatbots and customer service agents. They are also used in programming for code generation and debugging, and much more.

    For music generation, generative AI models can also be used. They create new pieces of music after being trained on a specific style or collection of tunes. A famous example is OpenAI's MuseNet.

    03:42

    Lois: You mentioned large language models in the context of text-based generative AI. So, let’s talk a little more about it. Himanshu, what exactly are large language models?

    Himanshu: LLMs are a type of artificial intelligence models built to understand, generate, and process human language at a massive scale. They were primarily designed for sequence to sequence tasks such as machine translation, where an input sequence is transformed into an output sequence.

    LLMs can be used to translate text from one language to another. For example, an LLM could be used to translate English text into French. To do this job, LLM is trained on a massive data set of text and code which allows it to learn the patterns and relationships that exist between different languages. The LLM translates, “How are you?” from English to French, “Comment allez-vous?”

    It can also answer questions like, what is the capital of France? And it would answer, the capital of France is Paris. And it will write an essay on a given topic. For example, write an essay on the French Revolution, and it will come up with a response with a title and introduction.

    04:53

    Lois: And how do LLMs actually work?

    Himanshu: So, LLM models are typically based on deep learning architectures such as transformers. They are also trained on vast amounts of text data to learn language patterns and relationships, again, with a massive number of parameters, usually on the order of millions or even billions. LLMs also have the ability to comprehend and understand natural language text at a semantic level. They can grasp context, infer meaning, and identify relationships between words and phrases.

    05:26

    Nikita: What are the most important factors for a large language model?

    Himanshu: Model size and parameters are crucial aspects of large language models and other deep learning models. They significantly impact the model's capabilities, performance, and resource requirements. So, what is model size? The model size refers to the amount of memory required to store the model's parameters and other data structures. Larger model sizes generally lead to better performance as they can capture more complex patterns and representations from the data.

    The parameters are the numerical values of the model that change as it learns to minimize the model's error on the given task. In the context of LLMs, parameters refer to the weights and biases of the model's transformer layers. Parameters are usually measured in terms of millions or billions. For example, GPT-3, one of the largest LLMs to date, has 175 billion parameters, making it extremely powerful in language understanding and generation.

    Tokens represent the individual units into which a piece of text is divided during processing by the model. In natural language, tokens are usually words, subwords, or characters. Some models have a maximum token limit that they can process, and longer text may require truncation or splitting. Again, balancing model size, parameters, and token handling is crucial when working with LLMs.
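
    As a small, concrete way to see tokens and parameter counts, here is a sketch using the open source Hugging Face transformers library (not an Oracle tool, just a convenient illustration). GPT-2 small is used because its weights are public and small enough to download quickly.

        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        # Text is split into tokens (words, subwords, or characters) before processing.
        tokens = tokenizer.tokenize("Jane threw the frisbee, and her dog fetched it.")
        print(tokens)

        # Parameters are the learned weights and biases; GPT-2 small has roughly 124 million.
        num_params = sum(p.numel() for p in model.parameters())
        print(f"{num_params:,} parameters")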

    06:49

    Nikita: But what’s so great about LLMs?

    Himanshu: Large language models can understand and interpret human language more accurately and contextually. They can comprehend complex sentence structures, nuances, and word meanings, enabling them to provide more accurate and relevant responses to user queries. These models can generate human-like text that is coherent and contextually appropriate. This capability is valuable for content creation, automated writing, and generating personalized responses in applications like chatbots and virtual assistants. They can perform a variety of tasks.

    Large language models are very versatile and adaptable to various industries. They can be customized to excel in applications such as language translation, sentiment analysis, code generation, and much more. LLMs can handle multiple languages making them valuable for cross-lingual tasks like translation, sentiment analysis, and understanding diverse global content.

    Large language models can again be fine-tuned for a specific task using a minimal amount of domain data. The efficiency of LLMs usually grows with more data and parameters.

    07:55

    Lois: You mentioned the “sequence to sequence tasks” earlier. Can you explain the concept in simple terms for us?

    Himanshu: Understanding language is difficult for computers and AI systems. The reason being that words often have meanings based on context. Consider a sentence such as Jane threw the frisbee, and her dog fetched it.

    In this sentence, there are a few things that relate to each other. Jane is doing the throwing. The dog is doing the fetching. And it refers to the frisbee. Suppose we are looking at the word “it” in the sentence. As a human, we understand easily that “it” refers to the frisbee. But for a machine, it can be tricky.

    The goal in sequence problems is to find patterns, dependencies, or relationships within the data and make predictions, classification, or generate new sequences based on that understanding.

    08:48

    Lois: And where are sequence models mostly used?

    Himanshu: Some common examples of sequence models include natural language processing, which we call NLP, tasks such as machine translation, text generation, sentiment analysis, and language modeling, which involve dealing with sequences of words or characters.

    Speech recognition. Converting audio signals into text involves working with sequences of phonemes or subword units to recognize spoken words. Music generation. Generating new music involves modeling musical sequences, notes, and rhythms to create original compositions.

    Gesture recognition. Sequences of motion or hand gestures are used to interpret human movements for applications, such as sign language recognition or gesture-based interfaces. Time series analysis. In fields such as finance, economics, weather forecasting, and signal processing, time series data is used to predict future values, detect anomalies, and understand patterns in temporal data.

    09:56

    The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community.

    Visit mylearn.oracle.com to get started.

    10:23

    Nikita: Welcome back! Himanshu, what would be the best way to solve those sequence problems you mentioned? Let’s use the same sentence, “Jane threw the frisbee, and her dog fetched it” as an example.

    Himanshu: The solution is transformers. It's like the model has a bird's eye view of the entire sentence and can see how all the words relate to each other. This allows it to understand the sentence as a whole instead of just a series of individual words. Transformers, with their self-attention mechanism, can look at all the words in the sentence at the same time and understand how they relate to each other.

    For example, transformer can simultaneously understand the connections between Jane and dog even though they are far apart in the sentence.

    11:13

    Nikita: But how?

    Himanshu: The answer is attention, which adds context to the text. Attention would notice dog comes after frisbee, fetched comes after dog, and it comes after fetched.

    The transformer does not look at "it" in isolation. Instead, it also pays attention to all the other words in the sentence at the same time. By considering all these connections, the model can figure out that "it" likely refers to the frisbee.

    The most famous current models that are emerging in natural language processing tasks consist of dozens of transformers or some of their variants, for example, GPT or BERT.
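
    To make the self-attention idea less abstract, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer. The small random matrices stand in for the learned query, key, and value projections of the ten tokens in the frisbee sentence.

        import numpy as np

        def softmax(x, axis=-1):
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def scaled_dot_product_attention(Q, K, V):
            # Every token's query is compared against every token's key, so each
            # position can "pay attention" to every other position at once.
            d_k = Q.shape[-1]
            scores = Q @ K.T / np.sqrt(d_k)
            weights = softmax(scores, axis=-1)   # attention weights per token pair
            return weights @ V, weights

        # Ten tokens ("Jane threw the frisbee , and her dog fetched it"),
        # each represented here by a random 4-dimensional vector.
        rng = np.random.default_rng(0)
        Q, K, V = (rng.normal(size=(10, 4)) for _ in range(3))

        output, weights = scaled_dot_product_attention(Q, K, V)
        print(weights.shape)  # (10, 10): how much each token attends to every other token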

    11:53

    Lois: I was looking at the AI Foundations course on MyLearn and came across the terms “prompt engineering” and “fine tuning.” Can you shed some light on them?

    Himanshu: A prompt is the input or initial text provided to the model to elicit a specific response or behavior. So, this is something you write or ask a language model. Now, what is prompt engineering? Prompt engineering is the process of designing and formulating specific instructions or queries to interact with a large language model effectively.
    In the context of large language models, such as GPT-3 or BERT, prompts are the input text or questions given to the model to generate responses or perform specific tasks.

    The goal of prompt engineering is to ensure that the language model understands the user's intent correctly and provides accurate and relevant responses.
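
    As a small illustration (not from the episode), here is what an engineered prompt might look like in Python. The two worked examples inside the prompt are a simple form of few-shot prompting, and send_to_llm is a hypothetical placeholder for whatever model API you actually use.

        # A hypothetical prompt for aspect-based review analysis.
        prompt = (
            "You are a restaurant review analyst.\n"
            "For each review, list every aspect mentioned and its sentiment.\n\n"
            'Review: "The food was great, but the service was slow."\n'
            "Aspects: food=positive, service=negative\n\n"
            'Review: "Lovely decor, and the pasta was outstanding."\n'
            "Aspects:"
        )

        # response = send_to_llm(prompt)  # send_to_llm is a placeholder, not a real API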

    12:47

    Nikita: That sounds easy enough, but fine tuning seems a bit more complex. Can you explain it with an example?

    Himanshu: Imagine you have a versatile recipe robot named chef bot. Suppose that chef bot is designed to create delicious recipes for any dish you desire.

    If you ask chef bot for a pizza recipe, for example, it recognizes the prompt as a request for a pizza recipe, and it knows exactly what to do.

    However, if you want chef bot to be an expert in a particular type of cuisine, such as Italian dishes, you fine-tune chef bot for Italian cuisine by immersing it in a culinary crash course filled with Italian cookbooks, traditional Italian recipes, and even Italian cooking shows.

    During this process, chef bot becomes more specialized in creating authentic Italian recipes, and this option is called fine tuning. LLMs are general purpose models that are pre-trained on large data sets but are often fine-tuned to address specific use cases.

    When you combine prompt engineering and fine tuning, you get a culinary wizard in chef bot, a recipe robot that is not only great at understanding specific dish requests but also capable of fulfilling them and even mastering the art of cooking in a particular culinary style.

    14:08

    Lois: Great! Now that we’ve spoken about all the major components, can you walk us through the life cycle of a large language model?

    Himanshu: The life cycle of a Large Language Model, LLM, involves several stages, from its initial pre-training to its deployment and ongoing refinement.

    The first stage of this life cycle is pre-training. The LLM is initially pre-trained on a large corpus of text data from the internet. During pre-training, the model learns grammar, facts, reasoning abilities, and general language understanding. The model predicts the next word in a sentence given the previous words, which helps it capture relationships between words and the structure of language.

    The second phase is fine tuning initialization. After pre-training, the model's weights are initialized, and it's ready for task-specific fine tuning. Fine tuning can involve supervised learning on labeled data for specific tasks, such as sentiment analysis, translation, or text generation.

    The model is fine-tuned on specific tasks using a smaller domain-specific data set. The weights from pre-training are updated based on the new data, making the model task aware and specialized. The next phase of the LLM life cycle is prompt engineering. In this phase, you craft effective prompts to guide the model's behavior in generating specific responses.
    Different prompt formulations, instructions, or context can be used to shape the output.

    15:34

    Nikita: Ok… we’re with you so far. What’s next?

    Himanshu: The next phase is evaluation and iteration. Models are evaluated using various metrics to assess their performance on specific tasks. Iterative refinement involves adjusting model parameters, prompts, and fine tuning strategies to improve results.

    As a part of this step, you also do few-shot and one-shot inference. If needed, you further fine-tune the model with a small number of examples: a handful of examples for few-shot, or a single example for one-shot, for new tasks or scenarios.
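
    Few-shot examples can also be supplied directly in the prompt at inference time. Here is an illustrative sketch of that style; the reviews and labels are invented.

        # Illustrative few-shot prompt: two labelled examples followed by the new case.
        few_shot_prompt = """Classify the sentiment of each review.

        Review: The pasta was wonderful and the staff were friendly.
        Sentiment: Positive

        Review: We waited an hour and the food arrived cold.
        Sentiment: Negative

        Review: The tiramisu was the best I have ever had.
        Sentiment:"""
        print(few_shot_prompt)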

    You also do bias mitigation and consider ethical concerns. Biases and ethical issues may arise in the model's output, so you need to implement measures to ensure fairness, inclusivity, and responsible use.

    16:28

    Himanshu: The next phase in LLM life cycle is deployment. Once the model has been fine-tuned and evaluated, it is deployed for real world applications. Deployed models can perform tasks, such as text generation, translation, summarization, and much more. You also perform monitoring and maintenance in this phase.

    So you continuously monitor the model's performance and output to ensure it aligns with desired outcomes. You also periodically update and retrain the model to incorporate new data and to adapt to evolving language patterns. This overall life cycle can also include a feedback loop, where you gather feedback from users and incorporate it into the model’s improvement process.

    You use this feedback to further refine prompts, fine tuning, and overall model behavior. RLHF, which is Reinforcement Learning from Human Feedback, is a very good example of this feedback loop. You also research and innovate as a part of this life cycle, where you continue to research and develop new techniques to enhance the model's capabilities and address different challenges associated with it.

    17:40

    Nikita: As we’re talking about the LLM life cycle, I see that fine tuning is not only about making an LLM task specific. So, what are some other reasons you would fine tune an LLM model?
    Himanshu: The first one is task-specific adaptation. Pre-trained language models are trained on extensive and diverse data sets and have good general language understanding. They excel in language generation and comprehension tasks, though that broad understanding of language may not lead to optimal performance in specific tasks.

    These models are not task specific. So the solution is fine tuning. The fine tuning process customizes the pre-trained models for a specific task by further training on task-specific data to adapt the model's knowledge.

    The second reason is domain-specific vocabulary. Pre-trained models might lack knowledge of specific words and phrases essential for certain tasks in fields, such as legal, medical, finance, and technical domains. This can limit their performance when applied to domain-specific data.

    Fine tuning enables the model to adapt and learn domain-specific words and phrases. These words could be, again, from different domains.

    18:56

    Himanshu: The third reason to fine tune is efficiency and resource utilization. So fine tuning is computationally efficient compared to training from scratch.

    Fine tuning reuses the knowledge from pre-trained models, saving time and resources. Fine tuning requires fewer iterations to achieve task-specific competence. Shorter training cycles expedite the model development process. It conserves computational resources, such as GPU memory and processing power.

    Fine tuning also leads to quicker model deployment and a faster time to production for real world applications. And fine tuning is scalable, enabling adaptation to various tasks with the same base model, which further reduces resource demands and leads to cost savings for research and development.
    The fourth reason to fine tune is ethical concerns. Pre-trained models learn from diverse data and can inherit different biases. Fine tuning might not completely eliminate biases, but careful curation of task-specific data helps avoid biased or harmful vocabulary. The responsible use of domain-specific terms promotes ethical AI applications.

    20:14

    Lois: Thank you so much, Himanshu, for spending time with us. We had such a great time learning from you. If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on our free AI Foundations course.

    Nikita: Yeah, we even have a detailed walkthrough of the architecture of transformers that you might want to check out. Join us next week for a discussion on the OCI AI Portfolio. Until then, this is Nikita Abraham…
    Lois: And Lois Houston signing off!

    20:44

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Did you know that the concept of deep learning goes way back to the 1950s? However, it is only in recent years that this technology has created a tremendous amount of buzz (and for good reason!). A subset of machine learning, deep learning is inspired by the structure of the human brain, making it fascinating to learn about. In this episode, Lois Houston and Nikita Abraham interview Senior Principal OCI Instructor Hemant Gahankari about deep learning concepts, including how Convolution Neural Networks work, and help you get your deep learning basics right. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    The world of artificial intelligence is vast and everchanging. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

    00:33

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular
    Oracle technologies. Let’s get started!

    00:47

    Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

    Nikita: Hi everyone!

    Lois: Today, we’re going to focus on the basics of deep learning with our Senior Principal OCI Instructor, Hemant Gahankari.

    Nikita: Hi Hemant! Thanks for being with us today. So, to get started, what is deep learning?

    01:14

    Hemant: Hi Niki and hi Lois. So, deep learning is a subset of machine learning that focuses on training Artificial Neural Networks, abbreviated as ANN, to solve a task at hand. Say, for example, image classification. A very important quality of the ANN is that it can process raw data like pixels of an image and extract patterns from it. These patterns are treated as features to predict the outcomes.

    Let us say we have a set of handwritten images of digits 0 to 9. As we know, everyone writes the digits in a slightly different way. So how do we train a machine to identify a handwritten digit? For this, we use ANN.

    ANN accepts image pixels as inputs, extracts patterns like edges and curves and so on, and correlates these patterns to predict an outcome, that is, which digit the image contains in this case.

    02:17

    Lois: Ok, so what you’re saying is given a bunch of pixels, ANN is able to process pixel data, learn an internal representation of the data, and predict outcomes. That’s so cool! So, why do we need deep learning?

    Hemant: We need to specify features when we train a machine learning algorithm. With deep learning, features are automatically extracted from the data. An internal representation of features and their combinations is built to predict outcomes by deep learning algorithms. This may not be feasible manually.
    Deep learning algorithms can make use of parallel computations. For this, data is usually split into small batches and processed in parallel. So these algorithms can process large amounts of data in a short time to learn the features and their combinations. This leads to scalability and performance. In short, deep learning complements machine learning algorithms for complex data for which features cannot be described easily.

    03:21

    Nikita: What can you tell us about the origins of deep learning?

    Hemant: Some of the deep learning concepts like the artificial neuron, perceptron, and multilayer perceptron existed as early as the 1950s. One of the most important concepts, using backpropagation for training ANNs, came in the 1980s.
    In the 1990s, convolutional neural networks were also introduced for image analysis tasks. Starting in 2000, GPUs were introduced. And from 2010 onwards, GPUs became cheaper and widely available. This fueled the widespread adoption of deep learning for uses like computer vision, natural language processing, speech recognition, text translation, and so on.

    In 2012, major networks like AlexNet and Deep-Q Network were built. From 2016 onward, generative use cases of deep learning also started to come up. Today, deep learning is widely adopted for a variety of use cases, including large language models and many other types of generative models.

    04:32

    Lois: Hemant, what are various applications of deep learning algorithms?

    Hemant: Deep learning algorithms are targeted at a variety of data and applications. For data, we have images, videos, text, and audio. For images, applications can be image classification, object detection, and segmentation. For textual data, applications are to translate the text or detect a sentiment of a text. For audio, the applications can be music generation, speech to text, and so on.

    05:05

    Lois: It's important that we select the right deep learning algorithm based on the data and application, right? So how do we do that?

    Hemant: For image tasks like image classification, object detection, image segmentation, or facial recognition, CNN is a suitable architecture. For text, we have a choice of the latest transformers or LSTM or even RNN. For generative tasks like text summarization or question answering, transformers are a good choice. For generating images or text-to-image generation, transformers, GANs, or diffusion models are available choices.

    05:45

    Nikita: Let’s dive a little deeper into Artificial Neural Networks. Can you tell us more about them, Hemant?

    Hemant: Artificial Neural Networks are inspired by the human brain. They are made up of interconnected nodes called neurons.

    Nikita: And how are inputs processed by a neuron?

    Hemant: In an ANN, we assign weights to the connections between neurons. Weighted inputs are added up, and if the sum crosses a specified threshold, the neuron is fired. The outputs of one layer of neurons become the input to another layer.
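
    Here is a minimal Python sketch of that computation for a single neuron, with made-up inputs, weights, and bias: the sum of weighted inputs plus the bias is compared against a threshold to decide whether the neuron fires.

        import numpy as np

        def neuron(inputs, weights, bias, threshold=0.0):
            # Weighted inputs are summed; the neuron "fires" (outputs 1) if the sum crosses the threshold.
            total = np.dot(inputs, weights) + bias
            return 1 if total > threshold else 0

        # Made-up numbers for illustration.
        x = np.array([0.5, 0.3, 0.9])     # outputs from the previous layer
        w = np.array([0.8, -0.2, 0.4])    # connection weights
        print(neuron(x, w, bias=0.1))     # 0.4 - 0.06 + 0.36 + 0.1 = 0.8 -> fires (prints 1)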

    06:16

    Lois: Hemant, tell us about the building blocks of ANN so we understand this better.

    Hemant: So the first building block is layers. We have an input layer, an output layer, and multiple hidden layers. The input layer and output layer are mandatory, and the hidden layers are optional. The layers consist of neurons. Neurons are computational units, which accept an input and produce an output.
    Weights determine the strength of connection between neurons. So the connection could be between an input and a neuron, or it could be between a neuron and another neuron. Activation functions work on the weighted sum of inputs to the neuron and produce an output. An additional input to the neuron that allows a certain degree of flexibility is called a bias.

    07:05

    Nikita: I think we’ve got the components of ANN straight but maybe you should give us an example. You mentioned this example earlier…of needing to train ANN to recognize handwritten digits from images. How would we go about that?
    Hemant: For that, we have to collect a large number of digit images, and we need to train ANN using these images.
    So, in this case, the images consist of 28 by 28 pixels, which act as the input layer. For the output, we have 10 neurons, which represent digits 0 to 9. And we have multiple hidden layers. So, for example, we have two hidden layers consisting of 16 neurons each.

    The hidden layers are responsible for capturing the internal representation of the raw images. And the output layer is responsible for producing the desired outcomes. So, in this case, the desired outcome is the prediction of whether the digit is 0 or 1 or up to digit 9.

    So how do we train this particular ANN? The first thing we use is the backpropagation algorithm. During training, we show an image to the ANN. Let’s say it is an image of digit 2, so we expect the output neuron for digit 2 to fire. But in reality, let’s say the output neuron for digit 6 fired.

    08:28

    Lois: So, then, what do we do?

    Hemant: We know that there is an error. So we correct the error. We adjust the weights of the connections between neurons based on a calculation, which we call the backpropagation algorithm. By showing thousands of images and adjusting the weights iteratively, the ANN is able to predict correct outcomes for most of the input images. This process of adjusting weights through backpropagation is called model training.
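
    The sketch below builds the network just described (784 input pixels, two hidden layers of 16 neurons, 10 outputs) in PyTorch and runs a single forward pass, loss calculation, and backpropagation step on one made-up image. The random pixel data is purely illustrative; a real run would loop over thousands of labelled digit images.

        import torch
        from torch import nn

        # 28x28 = 784 input pixels, two hidden layers of 16 neurons, 10 output neurons for digits 0-9.
        model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 16), nn.ReLU(),
            nn.Linear(16, 16), nn.ReLU(),
            nn.Linear(16, 10),
        )
        loss_fn = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

        # One made-up "image" of a digit 2 (random pixels, purely for illustration).
        image = torch.rand(1, 28, 28)
        label = torch.tensor([2])

        logits = model(image)            # forward pass: which output neuron responds most strongly?
        loss = loss_fn(logits, label)    # error between the prediction and the expected digit
        optimizer.zero_grad()
        loss.backward()                  # backpropagation computes how each weight should change
        optimizer.step()                 # weights are nudged to reduce the error
        print("loss for this example:", loss.item())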

    09:01

    Do you have an idea for a new course or learning opportunity? We’d love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects!

    Visit mylearn.oracle.com to get started.

    09:22

    Nikita: Welcome back! Let’s move on to CNN. Hemant, what is a Convolutional Neural Network?

    Hemant: CNN is a type of deep learning model specifically designed for processing and analyzing grid-like data, such as images and videos. In an ANN, the input image is converted to a one-dimensional array and given as an input to the network.
    But that does not work well with image data, because image data is inherently two dimensional. A CNN works better with two-dimensional data. The role of the CNN is to reduce the images into a form that is easier to process, without losing features that are critical for getting a good prediction.

    10:10

    Lois: A CNN has different layers, right? Could you tell us a bit about them?

    Hemant: The first one is the input layer. The input layer is followed by feature extraction layers, which are a combination and repetition of a convolutional layer with ReLU activation and a pooling layer.
    And this is followed by a classification layer. These are the fully connected output layers, where the classification occurs as output classes. The class with the highest probability is the predicted class. And finally, we have the dropout layer. This layer is a regularization technique used to prevent overfitting in the network.
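
    A minimal PyTorch sketch of that layer stack, assuming 28 by 28 grayscale inputs and 10 output classes chosen only for illustration, looks like this.

        import torch
        from torch import nn

        # Feature extraction (conv + ReLU + pooling, repeated), then dropout and a
        # fully connected classification layer.
        cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
            nn.Flatten(),
            nn.Dropout(0.25),             # regularization to prevent overfitting
            nn.Linear(16 * 7 * 7, 10),    # fully connected output layer (one score per class)
        )

        batch = torch.rand(4, 1, 28, 28)  # a made-up batch of 4 grayscale images
        print(cnn(batch).shape)           # torch.Size([4, 10])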

    10:51

    Nikita: And what are the top applications of CNN?

    Hemant: One of the most widely used applications of CNNs is image classification. For example, classifying whether an image contains a specific object, say cat or a dog.

    CNNs are also used for object detection tasks. The goal here is to draw bounding boxes around objects in an image. CNNs can perform pixel-level segmentation, where each pixel in the image is labeled to represent different objects or regions. CNNs are employed for face recognition tasks as well, identifying and verifying individuals based on facial features.

    CNNs are widely used in medical image analysis, helping with tasks like tumor detection, diagnosis, and classification of various medical conditions. CNNs play an important role in the development of self-driving cars, helping them to recognize and understand the road traffic signs, pedestrians, and other vehicles.

    12:02
    Nikita: Hemant, let’s talk about sequence models. What are they and what are they used for?

    Hemant: Sequence models are used to solve problems, where the input data is in the form of sequences. The sequences are ordered lists of data points or events.

    The goal in sequence models is to find patterns and dependencies within the data and make predictions, classifications, or even generate new sequences.

    12:31

    Lois: Can you give us some examples of sequence models?

    Hemant: Some common examples of sequence models are in natural language processing, where deep learning models are used for tasks such as machine translation, sentiment analysis, or text generation. In speech recognition, deep learning models are used to convert recorded audio into text.

    Deep learning models can generate new music or create original compositions. Even sequences of hand gestures are interpreted by deep learning models for applications like sign language recognition. In fields like finance or weather prediction, time series data is used to predict future values.

    13:15

    Nikita: Which deep learning models can be used to work with sequence data?

    Hemant: Recurrent Neural Networks, abbreviated as RNNs, are a class of neural network architectures specifically designed to handle sequential data. Unlike traditional feedforward neural networks, RNNs have a feedback loop that allows information to persist across different timesteps.

    The key feature of RNNs is their ability to maintain an internal state, often referred to as a hidden state or memory, which is updated as the network processes each element in the input sequence. The hidden state is used as input to the network for the next time step, allowing the model to capture dependencies and patterns in the data that are spread across time.
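
    A bare-bones NumPy sketch of one recurrent step shows the idea: the hidden state is recomputed at every timestep from the current input and the previous state, so information persists across the sequence. The weights and the input sequence are random placeholders.

        import numpy as np

        def rnn_step(x_t, h_prev, W_x, W_h, b):
            # The new hidden state mixes the current input with the previous state.
            return np.tanh(W_x @ x_t + W_h @ h_prev + b)

        rng = np.random.default_rng(0)
        W_x, W_h, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)

        h = np.zeros(4)                        # the hidden state ("memory") starts empty
        sequence = rng.normal(size=(5, 3))     # a made-up sequence of 5 timesteps, 3 features each
        for x_t in sequence:
            h = rnn_step(x_t, h, W_x, W_h, b)  # the state carries over to the next timestep
        print(np.round(h, 3))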

    14:07

    Nikita: Are there various types of RNNs?

    Hemant: There are different types of RNN architectures based on application.

    One of them is one to one. This is like a feedforward neural network and is not suited for sequential data. A one-to-many model produces multiple output values for one input value. Music generation or sequence generation are some applications using this architecture.

    A many-to-one model produces one output value after receiving multiple input values. An example is sentiment analysis based on a review. A many-to-many model produces multiple output values for multiple input values. Examples are machine translation and named entity recognition.

    RNNs do not perform well when it comes to capturing long-term dependencies. This is due to the vanishing gradients problem, which is overcome by using the LSTM model.

    15:11

    Lois: Another acronym. What is LSTM, Hemant?

    Hemant: Long Short-Term Memory, abbreviated as LSTM, works by using a specialized memory cell and a gating mechanism to capture long-term dependencies in sequential data.
    The key idea behind LSTM is to selectively remember or forget information over time, enabling the model to maintain relevant information over long sequences, which helps overcome the vanishing gradients problem.

    15:45

    Nikita: Can you take us, step-by-step, through the working of LSTM?

    Hemant: At each timestep, the LSTM takes an input vector representing the current data point in the sequence. The LSTM also receives the previous hidden state and cell state. These represent what the LSTM has remembered and forgotten up to the current point in the sequence.

    The core of the LSTM lies in its gating mechanisms, which include three gates: the input gate, the forget gate, and the output gate. These gates are like the filters that control the flow of information within the LSTM cell. The input gate decides what new information from the current input should be added to the memory cell.

    The forget gate determines what information in the current memory cell should be discarded or forgotten. The output gate regulates how much of the current memory cell should be exposed as the output of the current time step.
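
    Here is a compact NumPy sketch of one LSTM timestep with those three gates, using randomly initialized weights purely for illustration; production code would normally rely on a framework's built-in LSTM layer instead.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def lstm_step(x_t, h_prev, c_prev, W, b):
            # One timestep: input, forget, and output gates control the memory cell.
            z = W @ np.concatenate([x_t, h_prev]) + b
            i, f, o, g = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
            g = np.tanh(g)                                 # candidate new information
            c = f * c_prev + i * g                         # forget some old memory, add some new
            h = o * np.tanh(c)                             # expose part of the memory as output
            return h, c

        rng = np.random.default_rng(1)
        hidden, inputs = 4, 3
        W = rng.normal(size=(4 * hidden, inputs + hidden))
        b = np.zeros(4 * hidden)

        h, c = np.zeros(hidden), np.zeros(hidden)
        for x_t in rng.normal(size=(6, inputs)):           # a made-up 6-step sequence
            h, c = lstm_step(x_t, h, c, W, b)
        print(np.round(h, 3))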

    16:52

    Lois: Thank you, Hemant, for joining us in this episode of the Oracle University Podcast. I learned so much today. If you want to learn more about deep learning, visit mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course.

    And remember, the AI Foundations course and certification are free. So why not get started now?

    Nikita: Right, Lois. In our next episode, we will discuss generative AI and language learning models. Until then, this is Nikita Abraham…

    Lois: And Lois Houston signing off!

    17:26

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate
    and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • Does machine learning feel like too convoluted a topic? Not anymore! Listen to hosts Lois Houston and Nikita Abraham, along with Senior Principal OCI Instructor Hemant Gahankari, talk about foundational machine learning concepts and dive into how supervised learning, unsupervised learning, and reinforcement learning work. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00
    The world of artificial intelligence is vast and everchanging. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

    00:33

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:47

    Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal
    Technical Editor.

    Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence and we’re going to take it a step further today by talking about some foundational machine learning concepts. After that, we’ll discuss the three main types of machine learning models: supervised learning, unsupervised learning, and reinforcement learning.

    01:18

    Lois: Hemant Gahankari, a Senior Principal OCI Instructor, joins us for this episode. Hi Hemant! Let’s dive right in. What is machine learning? How does it work?

    Hemant: Machine learning is a subset of artificial intelligence that focuses on creating computer systems that can learn and predict outcomes from given examples without being explicitly programmed. It is powered by algorithms that incorporate intelligence into machines by automatically learning from a set of examples usually provided as data.

    01:54

    Nikita: Give us a few examples of machine learning… so we can see what it can do for us.

    Hemant: Machine learning is used by all of us in our day-to-day life.

    When we shop online, we get product recommendations based on our preferences and our shopping history. This is powered by machine learning.

    We are notified about movies recommendations based on our viewing history and choices of other similar viewers. This too is driven by machine learning.

    While browsing emails, we are warned of a spam mail because machine learning classifies whether the mail is spam or not based on its content. In the increasingly popular self-driving cars, machine learning is responsible for taking the car to its destination.

    02:45

    Lois: So, how does machine learning actually work?

    Hemant: Let us say we have a computer and we need to teach the computer to differentiate between a cat and a dog. We do this by describing features of a cat or a dog.

    Dogs and cats have distinguishing features. For example, body color, texture, and eye color are some of the defining features that can be used to differentiate a cat from a dog. These are collectively called the input data.

    We also provide a corresponding output, which is called a label, which can be a dog or a cat in this case. By describing a specific set of features, we can say that it is a cat or a dog.

    The machine learning model is first trained with the data set. The training data set consists of a set of features and output labels, and it is given as an input to the machine learning model. During the process of training, the machine learning model learns the relation between the input features and the corresponding output labels from the provided data. Once the model learns from the data, we have a trained model.

    Once the model is trained, it can be used for inference. Inference is the process of getting a prediction by giving the model a data point. In this example, we input the features of a cat or a dog, and the trained model predicts the output, that is, a cat or a dog label.
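
    A small scikit-learn sketch of that training and inference flow is shown below. The numeric features (body length, ear length, weight) and their values are invented for illustration; the point is only that the model learns the feature-to-label mapping from examples and then predicts labels for new data points.

        from sklearn.tree import DecisionTreeClassifier

        # Made-up training data: [body_length_cm, ear_length_cm, weight_kg] with labels.
        features = [[46, 4, 4.0], [50, 5, 4.5], [48, 4, 3.8],        # cats
                    [90, 10, 25.0], [75, 12, 20.0], [95, 11, 30.0]]  # dogs
        labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

        model = DecisionTreeClassifier().fit(features, labels)  # training: learn feature -> label mapping

        # Inference: predict the label for new, unseen data points.
        print(model.predict([[47, 5, 4.2]]))    # expected: ['cat']
        print(model.predict([[85, 11, 22.0]]))  # expected: ['dog']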

    The types of machine learning models depend on whether we have a labeled output or not.

    04:28

    Nikita: Oh, there are different types of machine learning models?
    Hemant: In general, there are three types of machine learning approaches.

    In supervised machine learning, labeled data is used to train the model. Model learns the relation between features and labels.
    Unsupervised learning is generally used to understand relationships within a data set. Labels are not used or are not available.

    Reinforcement learning uses algorithms that learn from outcomes to make decisions or choices.

    05:06

    Lois: Ok…supervised learning, unsupervised learning, and reinforcement learning. Where do we use each of these machine learning models?

    Hemant: Some of the popular applications of supervised machine learning are disease detection, weather forecasting, stock price prediction, spam detection, and credit scoring. For example, in disease detection, the patient data is input to a machine learning model, and machine learning model predicts if a patient is suffering from a disease or not.

    For unsupervised machine learning, some of the most common real-time applications are to detect fraudulent transactions, customer segmentation, outlier detection, and targeted marketing campaigns. So for example, given the transaction data, we can look for patterns that lead to fraudulent transactions.

    Most popular among reinforcement learning applications are automated robots, autonomous driving cars, and playing games.

    06:12

    Nikita: I want to get into how each type of machine learning works. Can we start with supervised learning?

    Hemant: Supervised learning is a machine learning model that learns from labeled data. The model learns the mapping between the input and the output.

    Take a house price predictor model: we input the house size in square feet, and the model predicts the price of the house. Or suppose we need to develop a machine learning model for detecting cancer; the input to the model would be the person's medical details, and the output would be whether the tumor is malignant or not.
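
    For the house price example, the model maps a continuous input to a continuous output, which is regression. Here is a tiny scikit-learn sketch with made-up sizes and prices that happen to follow a straight line.

        from sklearn.linear_model import LinearRegression

        # Made-up examples: house size in square feet -> price (continuous output = regression).
        sizes = [[800], [1000], [1500], [2000], [2500]]
        prices = [160_000, 200_000, 300_000, 400_000, 500_000]

        model = LinearRegression().fit(sizes, prices)
        print(model.predict([[1800]]))   # about 360,000 for an unseen 1,800 sq ft house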

    06:50

    Lois: So, that mapping between the input and output is fundamental in supervised learning.

    Hemant: Supervised learning is similar to a teacher teaching a student. The model is trained with past outcomes, and it learns the relationship or mapping between the input and output.

    In a supervised machine learning model, the outputs can be either categorical or continuous. When the output is continuous, we use regression. And when the output is categorical, we use classification.

    07:26

    Lois: We want to keep this discussion at a high level, so we’re not going to get into regression and classification. But if you want to learn more about these concepts and look at some demonstrations, visit mylearn.oracle.com.

    Nikita: Yeah, look for the Oracle Cloud Infrastructure AI Foundations course and you’ll find a lot of resources that you can make use of.

    07:51

    The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community.

    Visit mylearn.oracle.com to get started.

    08:19

    Nikita: Welcome back! So that was supervised machine learning. What about unsupervised machine learning, Hemant?

    Hemant: Unsupervised machine learning is a type of machine learning where there are no labeled outputs. The algorithm learns the patterns and relationships in the data and groups similar data items. In unsupervised machine learning, the patterns in the data are explored explicitly without being told what to look for.

    For example, if you give a set of different-colored LEGO pieces to a child and ask them to sort it, they may sort the LEGO pieces based on any patterns they observe. It could be based on the same color or same size or same type. Similarly, in unsupervised learning, we group unlabeled data sets.

    One more example could be-- say, imagine you have a basket of various fruits-- say, apples, bananas, and oranges-- and your task is to group these fruits based on their similarities. You observe that some fruits are round and red, while others are elongated and yellow. Without being told explicitly, you decide to group the round and red fruits together as one cluster and the elongated and yellow fruits as another cluster. There you go. You have just performed an unsupervised learning task.
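
    Here is a short scikit-learn sketch of that fruit example using k-means clustering, with invented roundness, redness, and length features. No labels are passed to the algorithm, yet the round fruits and the elongated fruits end up in different clusters.

        from sklearn.cluster import KMeans

        # Made-up features per fruit: [roundness (0-1), redness (0-1), length_cm].
        fruits = ["apple", "apple", "banana", "banana", "orange", "orange"]
        X = [[0.90, 0.80, 8.0], [0.85, 0.90, 7.5],
             [0.20, 0.10, 20.0], [0.25, 0.05, 19.0],
             [0.95, 0.40, 9.0], [0.90, 0.35, 8.5]]

        # No labels are given; the algorithm groups similar fruits on its own.
        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        for name, cluster in zip(fruits, kmeans.labels_):
            print(name, "-> cluster", cluster)   # round fruits vs elongated fruits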

    09:42

    Lois: Where is unsupervised machine learning used? Can you take us through some use cases?

    Hemant: The first use case of unsupervised machine learning is market segmentation. In market segmentation, one example is providing the purchasing details of an online shop to a clustering algorithm. Based on the items purchased and purchasing behavior, the clustering algorithm can identify customers based on the similarity between the products purchased. For example, customers with a particular age group who buy protein diet products can be shown an advertisement of sports-related products.

    The second use case is outlier analysis. One typical example for outlier analysis is to provide credit card purchase data for clustering. Fraudulent transactions can be detected by a bank by using outliers. In some transactions, the amounts are too high or recurring, which signifies an outlier.

    The third use case is recommendation systems. An example for recommendation systems is to provide users' movie viewing history as input to a clustering algorithm. It clusters users based on the type or rating of movies they have watched. The output helps to provide personalized movie recommendations to users. The same applies for music recommendations also.

    11:14

    Lois: And finally, Hemant, let’s talk about reinforcement learning.

    Hemant: Reinforcement learning is like teaching a dog new tricks. You reward it when it does something right, and over time, it learns to perform these actions to get more rewards. Reinforcement learning is a type of Machine Learning that enables an agent to learn from its interaction with the environment, while receiving feedback in the form of rewards or penalties without any labeled data.


    Reinforcement learning is more prevalent in our daily lives than we might realize. The development of self-driving cars and autonomous drones relies heavily on reinforcement learning to make real-time decisions based on sensor data, traffic conditions, and safety considerations.

    Many video games, virtual reality experiences, and interactive entertainment use reinforcement learning to create intelligent and challenging computer-controlled opponents. The AI characters in games learn from player interactions and become more difficult to beat as the game progresses.

    12:25

    Nikita: Hemant, take us through some of the terminology that’s used with reinforcement learning.

    Hemant: Let us say we want to train a self-driving car to drive on a road and reach its destination. For this, it would need to learn how to steer the car based on what it sees in front through a camera. The car and its intelligence to steer on the road is called an agent.

    More formally, agent is a learner or decision maker that interacts with the environment, takes actions, and learns from the feedback received. Environment, in this case, is the road and its surroundings with which the car interacts. More formally, environment is the external system with which the agent interacts. It is the world or context in which the agent operates and receives feedback for its actions.

    What we see through a camera in front of a car at a moment is a state. State is a representation of the current situation or configuration of the environment at a particular time. It contains the necessary information for the agent to make decisions. The actions in this example are to drive left, or right, or keep straight. Actions are a set of possible moves or decisions that the agent can take in a given state.

    Actions have an impact on the environment and influence future states. After driving through the road many times, the car learns what action to take when it views a road through the camera. This learning is a policy. Formally, policy is a strategy or mapping that the agent uses to decide which action to take in a given state. It defines the agent's behavior and determines how it selects actions.

    14:13

    Lois: Ok. Say we’re talking about the training loop of reinforcement learning in the context of training a dog to learn tricks. We want it to pick up a ball, roll, sit…

    Hemant: Here the dog is an agent, and the place it receives training is the environment. While training the dog, you provide a positive reward signal if the dog picks it right and a warning or punishment if the dog does not pick up a trick. In due course, the dog gets trained by the positive rewards or negative punishments.

    The same tactics are applied to train a machine in reinforcement learning. For machines, the policy is the brain of our agent. It is a function that tells what actions to take when in a given state. The goal of the reinforcement learning algorithm is to find a policy that will yield a lot of rewards for the agent if the agent follows it, referred to as the optimal policy.
    Through a process of learning from experiences and feedback, the agent becomes more proficient at making good decisions and accomplishing tasks. This process continues until eventually we end up with the optimal policy. The optimal policy is learned through training by using algorithms like Deep Q Learning or Q Learning.
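
    The sketch below shows tabular Q-learning, one of the algorithms just mentioned, on a made-up five-state corridor where the reward sits at the right end. The environment, rewards, and hyperparameters are invented for illustration; after training, the learned policy is simply to keep moving right.

        import random

        # Tiny corridor world: states 0..4, reward at state 4; actions: 0 = left, 1 = right.
        n_states, n_actions = 5, 2
        Q = [[0.0] * n_actions for _ in range(n_states)]
        alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

        def step(state, action):
            next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
            reward = 1.0 if next_state == n_states - 1 else 0.0
            return next_state, reward

        random.seed(0)
        for episode in range(200):
            state = 0
            while state != n_states - 1:
                # Epsilon-greedy action selection (explore on ties as well).
                if random.random() < epsilon or Q[state][0] == Q[state][1]:
                    action = random.randrange(n_actions)
                else:
                    action = 0 if Q[state][0] > Q[state][1] else 1
                next_state, reward = step(state, action)
                # Q-learning update: move the estimate toward reward + discounted best future value.
                Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
                state = next_state

        print(["right" if Q[s][1] > Q[s][0] else "left" for s in range(n_states - 1)])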

    15:40

    Nikita: So through multiple training iterations, it gets better. That’s fantastic. Thanks, Hemant, for joining us today. We’ve learned so much from you.

    Lois: Remember, the course and certification are free, so if you’re interested, make sure you log in to mylearn.oracle.com and get going. Join us next week for another episode of the Oracle University Podcast. Until then, I’m Lois Houston…

    Nikita: And Nikita Abraham signing off!

    16:09

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

  • You probably interact with artificial intelligence (AI) more than you realize. So, there’s never been a better time to start figuring out how it all works. Join Lois Houston and Nikita Abraham as they decode the fundamentals of AI so that anyone, irrespective of their technical background, can leverage the benefits of AI and tap into its infinite potential. Together with Senior Cloud Engineer Nick Commisso, they take you through key AI concepts, common AI tasks and domains, and the primary differences between AI, machine learning, and deep learning. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript:

    00:00

    The world of artificial intelligence is vast and everchanging. And with all the buzz around it lately, we figured it was the perfect time to revisit our AI Made Easy series. Join us over the next few weeks as we chat about all things AI, helping you to discover its endless possibilities. Ready to dive in? Let’s go!

    00:33

    Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!

    00:46

    Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

    Lois: Hi there! Welcome to a new season of the Oracle University Podcast. I’m so excited about this season because we’re going to delve into the world of artificial intelligence. In upcoming episodes, we’ll talk about the fundamentals of artificial intelligence and machine learning. And we’ll discuss neural network architectures, generative AI and large language models, the OCI AI stack, and OCI AI services.

    01:27

    Nikita: So, if you’re an IT professional who wants to start learning about AI and ML or even if you’re a student who is familiar with OCI or similar cloud services, but have no prior exposure to this field, you’ll want to tune in to these episodes.

    Lois: That’s right, Niki. So, let’s get started. Today, we’ll talk about the basics of artificial intelligence with Senior Cloud Engineer Nick Commisso. Hi Nick! Thanks for joining us today. So, let’s start right at the beginning. What is artificial intelligence?

    01:57

    Nick: Well, the ability of machines to imitate the cognitive abilities and problem solving capabilities of human intelligence can be classified as artificial intelligence or AI.

    02:08

    Nikita: Now, when you say capabilities and abilities, what are you referring to?

    Nick: Human intelligence is the intellectual capability of humans that allows us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, and to communicate using a language and understand nonverbal cues, such as facial expressions, tone variation, and body language.

    You can handle objections in real time, even in a complex setting. You can plan for short and long-term situations or projects. And, of course, you can create music and art or invent something new like an original idea.

    If you can replicate any of these human capabilities in machines, this is artificial general intelligence or AGI. So in other words, AGI can mimic human sensory and motor skills, performance, learning, and intelligence, and use these abilities to carry out complicated tasks without human intervention.
    When we apply AGI to solve problems with specific and narrow objectives, we call it artificial intelligence or AI.

    03:16

    Lois: It seems like AI is everywhere, Nick. Can you give us some examples of where AI is used?

    Nick: AI is all around us, and you've probably interacted with AI, even if you didn't realize it. Some examples of AI can be viewing an image or an object and identifying if that is an apple or an orange. It could be examining an email and classifying it as spam or not. It could be writing computer language code or predicting the price of an older car.

    So let's get into some more specifics of AI tasks and the nature of related data. Machine learning, deep learning, and data science are all associated with AI, and it can be confusing to distinguish between them.

    03:57

    Nikita: Why do we need AI? Why’s it important?

    Nick: AI is vital in today's world, and with the amount of data that's generated, it far exceeds the human ability to absorb, interpret, and actually make decisions based on that data. That's where AI comes in handy by enhancing the speed and effectiveness of human efforts.

    So here are two major reasons why we need AI. Number one, we want to eliminate or reduce the amount of routine tasks, and businesses have a lot of routine tasks that need to be done in large numbers. So things like approving a credit card or a bank loan, processing an insurance claim, and recommending products to customers are just some examples of routine tasks that can be handled.

    And second, we, as humans, need a smart friend who can create stories and poems, designs, create code and music, and have humor, just like us.

    04:54

    Lois: I’m onboard with getting help from a smart friend! There are different domains in AI, right, Nick?

    Nick: We have language for language translation; vision, like image classification; speech, like text to speech; product recommendations that can help you cross-sell products; anomaly detection, like detecting fraudulent transactions; learning by reward, like self-driven cars. You have forecasting with weather forecasting. And, of course, generating content like image from text.

    05:24

    Lois: There are so many applications. Nick, can you tell us more about these commonly used AI domains like language, audio, speech, and vision?

    Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, or extracting key phrases and so on.

    Consider the example of translating text. There are many text translation tools where you simply type or paste your text into a given text box, choose your source and target language, and then click translate.

    Now, let's look at the generative AI tasks. They are generative, which means the output text is generated by a model. Some examples are creating text like stories or poems, summarizing a text, answering questions, and so on. Let's take the example of ChatGPT, the most well-known generative chat bot. These bots can create responses from their training on large language models, and they continuously grow through machine learning.

    06:31

    Nikita: What can you tell us about using text as data?

    Nick: Text is inherently sequential, and text consists of sentences. Sentences can have multiple words, and those words need to be converted to numbers for them to be used to train language models. This is called tokenization. Now, the length of sentences can vary, and all the sentence lengths need to be made equal. This is done through padding.

    Words can have similarities with other words, and sentences can also be similar to other sentences. The similarity can be measured through dot product similarity or cosine similarity. We need a way to indicate that similar words or sentences may be close by. This is done through a representation called an embedding.
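
    The short NumPy sketch below walks through those four ideas (tokenization, padding, embedding, and cosine similarity) on three made-up sentences. The random vectors stand in for learned embeddings, so the printed similarity numbers only illustrate the calculation, not real semantic closeness.

        import numpy as np

        sentences = ["the dog fetched the ball",
                     "a dog chased a ball",
                     "stocks fell sharply today"]

        # Tokenization: map each word to a number (vocabulary built on the fly, id 0 reserved for padding).
        vocab = {"<pad>": 0}
        token_ids = [[vocab.setdefault(word, len(vocab)) for word in s.split()] for s in sentences]

        # Padding: make every sequence the same length.
        max_len = max(len(ids) for ids in token_ids)
        padded = [ids + [vocab["<pad>"]] * (max_len - len(ids)) for ids in token_ids]
        print(padded)

        # Embedding: each token id maps to a vector; random vectors stand in for learned ones here.
        rng = np.random.default_rng(0)
        embeddings = rng.normal(size=(len(vocab), 8))
        sentence_vectors = [embeddings[ids].mean(axis=0) for ids in token_ids]

        def cosine_similarity(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # With trained embeddings, the two dog/ball sentences would score noticeably higher
        # against each other than against the unrelated stocks sentence.
        print(cosine_similarity(sentence_vectors[0], sentence_vectors[1]))
        print(cosine_similarity(sentence_vectors[0], sentence_vectors[2]))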

    07:17

    Nikita: And what about language AI models?

    Nick: Language AI models refer to artificial intelligence models that are specifically designed to understand, process, and generate natural language. These models have been trained on vast amounts of textual data that can perform various natural language processing or NLP tasks.

    The task that needs to be performed decides the type of input and output. The deep learning model architectures that are typically used to train models that perform language tasks are recurrent neural networks, which process data sequentially and store hidden states; long short-term memory, which processes data sequentially and can retain the context better through the use of gates; and transformers, which process data in parallel and use the concept of self-attention to better understand the context.

    08:09

    Lois: And then there’s speech-related AI, right?

    Nick: Speech-related AI tasks can be either audio related or generative AI. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. For example, speech-to-text conversion or speaker recognition, voice conversion, and so on. Generative AI tasks are generative in nature, so the output audio is generated by a model. For example, you have music composition and speech synthesis.
    Audio or speech is digitized as snapshots taken in time. The sample rate is the number of times in a second an audio sample is taken. Most digital audio have a sampling rate of 44.1 kilohertz, which is also the sampling rate for audio CDs.
    Multiple samples need to be correlated to make sense of the data. For example, listening to a song for a fraction of a second, you won't be able to infer much about the song, and you'll probably need to listen to it a little bit longer.

    Audio and speech AI models are designed to process and understand audio data, including spoken language. The deep learning model architectures used to train models that perform audio and speech tasks are recurrent neural networks, long short-term memory, transformers, variational autoencoders, waveform models, and Siamese networks. All of these models take into consideration the sequential nature of audio.

    09:42

    Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification.

    Visit mylearn.oracle.com to get started.

    10:10

    Nikita: Welcome back! Now that we’ve covered language and speech-related tasks, let’s move on to vision-related tasks.

    Nick: Vision-related AI tasks could be image related or generative AI. Image-related AI tasks will use an image as an input, and the output depends on the task. Some examples are classifying images, identifying objects in an image, and so on. Facial recognition is one of the most popular image-related tasks that is often used for surveillance and tracking of people in real time, and it's used in a lot of different fields, including security, biometrics, law enforcement, and social media.
    For generative AI tasks, the output image is generated by a model. For example, creating an image from a contextual description, generating images of a specific style or a high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of an object, machine components, buildings, medication, people, and even more.

    11:14

    Lois: So, then, here again I need to ask, how do images work as data?

    Nick: Images consist of pixels, and pixels can be either grayscale or color. And we can't really make out what an image is just by looking at one pixel.

    The task that needs to be performed decides the type of input needed and the output produced. Various architectures have evolved to handle this wide variety of tasks and data. The deep learning model architectures typically used to train models that perform vision tasks are convolutional neural networks, which detect patterns in images by learning hierarchical representations of visual features; YOLO, which is You Only Look Once and which processes the image and detects objects within the image; and generative adversarial networks, which generate realistic-looking images.

    12:04

    Nikita: Nick, earlier you mentioned other AI tasks like anomaly detection, recommendations, and forecasting. Could you tell us more about them?

    Nick: Anomaly detection. Time-series data is required for anomaly detection, and it can be univariate or multivariate, for uses like fraud detection and machine failure.
    Recommendations. You can recommend products using data of similar products or similar users, so that is the data required for recommendations.

    Forecasting. Time-series data is required for forecasting and can be used for things like weather forecasting and predicting the stock price.

    12:43

    Lois: Nick, help me understand the difference between artificial intelligence, machine learning, and deep learning. Let’s start with AI.

    Nick: Imagine a self-driving car that can make decisions like a human driver, such as navigating traffic or detecting pedestrians and making safe lane changes. AI refers to the broader concept of creating machines or systems that can perform tasks that typically require human intelligence. Next, we have machine learning or ML. Visualize a spam email filter that learns to identify and move spam emails to the spam folder, and that's based on the user's interaction and email content. Now, ML is a subset of AI that focuses on the development of algorithms that enable machines to learn from and make predictions or decisions based on data.

    To understand what an algorithm is in the context of machine learning, it refers to a specific set of rules, mathematical equations, or procedures that the machine learning model follows to learn from data and make predictions on. And finally, we have deep learning or DL. Think of an image recognition software that can identify specific objects or animals within images, such as recognizing cats in photos on the internet. DL is a subfield of ML that uses neural networks with many layers, deep neural networks, to learn and make sense of complex patterns in data.

    14:12

    Nikita: Are there different types of machine learning?

    Nick: There are several types of machine learning, including supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is where the algorithm learns from labeled data, making predictions or classifications. Unsupervised learning is where the algorithm discovers patterns and structures in unlabeled data, such as clustering or dimensionality reduction. And then you have reinforcement learning, where agents learn to make predictions and decisions by interacting with an environment and receiving rewards or punishments.

    14:47

    Lois: Can we do a deep dive into each of these types you just mentioned? We can start with the supervised machine learning algorithm.

    Nick: Let's take an example of how a credit card company would approve a credit card. Once the application and documents are submitted, a verification is done, followed by a credit score check and another 10 to 15 days for approval. And how is this done? Sometimes, purely manually or by using a rules engine where you can build rules, give new data, get a decision.
    The drawbacks are that it is slow, you need skilled people to build and update rules, and the rules keep changing. The good thing is that the businesses had a lot of insight as to how the decisions were made. Can we build rules by looking at the past data?
    We all learn by examples. Past data is nothing but a set of examples. Maybe reviewing past credit card approval history can help. Through a process of training, a model can be built that will have a specific intelligence to do a specific task. The heart of training a model is an algorithm that incrementally updates the model by looking at the data samples one by one.
    And once it's built, the model can be used to predict an outcome on a new data. We can train the algorithm with credit card approval history to decide whether to approve a new credit card. And this is what we call supervised machine learning. It's learning from labeled data.

    16:13

    Lois: Ok, I see. What about the unsupervised machine learning algorithm?

    Nick: Data does not have a specific outcome or a label as we know it. And sometimes, we want to discover trends that the data has for potential insights. Similar data can be grouped into clusters. For example, retail marketing and sales, a retail company may collect information like household size, income, location, and occupation so that the suitable clusters could be identified, like a small family or a high spender and so on. And that data can be used for marketing and sales purposes.
    Regulating streaming services. A streaming service may collect information like viewing sessions, minutes per session, number of unique shows watched, and so on. That can be used to regulate streaming services. Let's look at another example. We all know that fruits and vegetables have different nutritional elements. But do we know which of those fruits and vegetables are similar nutritionally?

    For that, we'll try to cluster fruits and vegetables' nutritional data and try to get some insights into it. This will help us include nutritionally different fruits and vegetables into our daily diets. Exploring patterns and data and grouping similar data into clusters drives unsupervised machine learning.

    17:34

    Nikita: And then finally, we come to the reinforcement learning algorithm.

    Nick: How do we learn to play a game, say, chess? We make a move or a decision, check the feedback to see if it's the right move, and keep the outcome in our memory for the next step we take, which is learning. Reinforcement learning is a machine learning approach where a computer program learns to make decisions by trying different actions and receiving feedback. It teaches agents how to solve tasks by trial and error.

    This approach is used in autonomous car driving and robots as well.

    18:06

    Lois: We keep coming across the term “deep learning.” You’ve spoken a bit about it a few times in this episode, but what is deep learning, really? How is it related to machine learning?

    Nick: Deep learning is all about extracting features and rules from data. Can we identify if an image is a cat or a dog by looking at just one pixel? Can we write rules to identify a cat or a dog in an image? Can the features and rules be extracted from the raw data, in this case, pixels?

    Deep learning is really useful in this situation. It's a special kind of machine learning that trains super smart computer networks with lots of layers. And these networks can learn things all by themselves from pictures, like figuring out if a picture is a cat or a dog.

    18:49

    Lois: I know we’re going to be covering this in detail in an upcoming episode, but before we let you go, can you briefly tell us about generative AI?

    Nick: Generative AI, a subset of machine learning, creates diverse content like text, audio, images, and more. These models, often powered by neural networks, learn patterns from existing data to craft fresh and creative output. For instance, ChatGPT generates text-based responses by understanding patterns in text data that it's been trained on. Generative AI plays a vital role in various AI tasks requiring content creation and innovation.

    19:28

    Nikita: Thank you, Nick, for sharing your expertise with us. To learn more about AI, go to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. As you complete the course, you’ll find skill checks that you can attempt to solidify your learning.

    Lois: And remember, the AI Foundations course on MyLearn also prepares you for the Oracle Cloud Infrastructure 2023 AI Foundations Associate certification. Both the course and the certification are free, so there’s really no reason NOT to take the leap into AI, right Niki?

    Nikita: That’s right, Lois!

    Lois: In our next episode, we will look at the fundamentals of machine learning. Until then, this is Lois Houston…
    Nikita: And Nikita Abraham signing off!

    20:13

    That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.