Oracle University Podcast
Oracle Corporation
Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Databa...

Available Episodes

5 of 107
  • What is Multicloud?
    This week, hosts Lois Houston and Nikita Abraham are shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers. Joined by Senior Manager of CSS OU Cloud Delivery Samvit Mishra, they discuss why multicloud is becoming essential for businesses, offering freedom from vendor lock-in and the ability to cherry-pick the best services. They also talk about Oracle's pioneering role in multicloud and its partnerships with Microsoft Azure, Google Cloud, and Amazon Web Services.   Oracle Cloud Infrastructure Multicloud Architect Professional: https://mylearn.oracle.com/ou/course/oracle-cloud-infrastructure-multicloud-architect-professional-2025-/144474 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.   -----------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we’re moving on to multicloud. In our next three episodes, we’ll be discussing what multicloud is and why there’s so much of a buzz around it. With us is Samvit Mishra, Senior Manager of CSS OU Cloud Delivery. Hi Samvit! Thanks for joining us today. 00:55 Samvit: Hi Niki! Hi Lois! Happy to be here. Lois: So Samvit, we know that Oracle has been an early adopter of multicloud and a pioneer in multicloud services. But for anyone who isn’t familiar with what multicloud is, can you explain what it means? Samvit: Absolutely, Lois. Multicloud is a very simple, basic concept. It is the coordinated use of cloud services from more than one cloud service provider.  01:21 Nikita: But why would someone want to use more than one cloud service provider? Samvit: There are many reasons why a customer might want to leverage two or more cloud service providers. First, it addresses the very real concern of mitigating or avoiding vendor lock-in. By using multiple providers, companies can avoid being tied down to one vendor and maintain their flexibility. 01:45 Lois: That’s like not putting all your eggs in one basket, so to speak. Samvit: Exactly. Another reason is that customers want the best of breed. What that means is basically leveraging or utilizing the best product from one cloud service provider and pairing it against the best product from another cloud service provider. Getting a solution out of the combined products…out of the coordinated use of those services.  02:14 Nikita: So, it sounds like multicloud is becoming the new normal. And as we were saying before, Oracle was a pioneer in this space. But why did we embrace multicloud so wholeheartedly? Samvit: We recognized that our customers were already moving in this direction. Independent studies from Flexera found that 89% of the subjects of the study used multicloud. And we conducted our own study and came to similar numbers. Over 90% of our customers use two or more cloud service providers.  
HashiCorp, the big infrastructure as code company, came to similar numbers as well, 94%. They basically asked companies if multicloud helped them advance their business goals. And 94% said yes. And all this is very recent data.  03:04 Lois: Can you give us the backstory of Oracle’s entry into the multicloud space? Samvit: Sure. So back in 2019, Oracle and Microsoft Azure joined forces and announced the interconnect service between Oracle Cloud Infrastructure and Microsoft Azure. The interconnect was between Oracle’s FastConnect and Microsoft Azure’s ExpressRoute. This was a big step, as it allowed for a direct connection between the two providers without needing a third-party. And now we have several of our data centers interconnected already. So, out of the 48 regions, 12 of them are already interconnected. And more are coming. And you can very easily configure the interconnect.  This interconnectivity guarantees low latency, high throughput, and predictable performance. And also, on the OCI side, there are no egress or ingress charges for your data.  There's also a product called Oracle Database@Azure, where Oracle and Microsoft deliver Oracle Database services in Microsoft Azure data centers.  04:12  Lois: That’s exciting! And what are the benefits of this product? Samvit: The main advantage is the co-location. Being co-located with the Microsoft Azure data center offers you native integration between Azure and OCI resources. No manual configuration of a private interconnect between the two providers is needed. You're going to get microsecond latency between your applications and the Oracle Database.  The OCI-native Exadata Database Service is available on Oracle Database@Azure. This enables you to get the highest level of Oracle Database performance, scalability, security, and availability. And your tech support can be provided either from Microsoft or from Oracle.  05:03 Unlock the power of AI Vector Search with our new course and certification. Get more accurate search results, handle complex datasets easily, and supercharge your data-driven decisions. From now through May 15, 2025, we are waiving the certification exam fee (valued at $245). Visit mylearn.oracle.com to enroll. 05:30 Nikita: Welcome back. Samvit, there have been some new multicloud milestones from OCI, right? Can you tell us about them?  Samvit: That’s right, Niki. I am thrilled to share the latest news on Oracle’s multicloud partnerships. We now have agreements with Microsoft Azure, Google Cloud, and Amazon Web Services.  So, as we were discussing earlier, with Azure, we have the Oracle Interconnect for Azure and Oracle Database@Azure. Now, with Google Cloud, we have the Oracle Interconnect for Google Cloud. And it is very similar to the Oracle Interconnect for Azure. With Google Cloud, we have physically interconnected data centers and they provide a sub-2 millisecond latency private interconnection. So, you can come in and provision virtual circuits going from Oracle FastConnect to Google Cloud Interconnect.  And the best thing is that there are no egress or ingress charges for your data. The way it is structured is you have your Oracle Cloud Infrastructure on one side, with your virtual cloud network, your subnets, and your resources. And on the other side, you have your Google Cloud router with your virtual private cloud subnet and your resources interconnecting.  
You initiate the connectivity on the Google Cloud side, retrieve the service key and provide that service key to Oracle Cloud Infrastructure, and complete the interconnection on the OCI side. So, for example, our US East Ashburn interconnect will match with us-east4 on the Google Cloud side.  07:08 Lois: Now, wasn’t the other major announcement Oracle Database@Google Cloud? Tell us more about that, please. Samvit: With Oracle Database@Google Cloud, you can run your applications on Google Cloud and the database as well inside the Google Cloud platform. That's the Oracle Cloud Infrastructure database co-located in Google Cloud platform data centers. It allows you to run native integration between GCP and OCI resources with no manual configuration of private interconnect between these two cloud service providers.  That means no FastConnect, no Interconnect because, again, the database is located in the Google Cloud data center. And you're going to get microsecond latency and the OCI native Exadata Database Service. So, you're going to gain the highest level of Oracle Database performance, scalability, security, and availability.  08:04 Lois: And how is the tech support managed? Samvit: The technical support is a collaboration between Google Cloud and Oracle Cloud Infrastructure. That means you can either have the technical support provided to completion by Google Cloud or by Oracle. One of us will provide you with an end-to-end solution.  08:22 Nikita: During CloudWorld last year, we also announced Oracle Database@AWS, right?  Samvit: Yes, Niki. That’s where Oracle and Amazon Web Services deliver the Oracle Database service on Oracle Cloud Infrastructure in your AWS data center. This will provide you with native integration between AWS and OCI resources, with no manual configuration of private interconnect between AWS and OCI. And you're getting microsecond latency with the OCI-native Exadata Database Service.  And again, as with Oracle Database@Google Cloud and Oracle Database@Azure, you're gaining the highest level of Oracle Database performance, scalability, security, and availability. And the technical support is provided by either AWS or Oracle all the way to completion. Now, Oracle Database@AWS is currently available in limited preview, with broader availability in the coming months as it expands to new regions to meet the needs of our customers.  09:28 Lois: That’s great. Now, how does Oracle fare when it comes to pricing, especially compared to our major cloud competitors?  Samvit: Our pricing is pretty consistent. You’ll see that in all cases across the world, we have the less expensive solution for you and the highest performance as well.  09:45 Nikita: Let’s move on to some use cases, Samvit. How might a company use the multicloud setup? Samvit: Let’s start with the split-stack architecture between Oracle Cloud Infrastructure and Microsoft Azure.  Like I was saying earlier, this partnership dates back to 2019. And basically, we eliminated the FastConnect partner from the middle. And this will provide you with high throughput, low latency, and very predictable performance, all of this on highly available links. These links are redundant, ensuring business continuity between OCI and Azure.  And you can have your database on the OCI side and your application on Microsoft Azure side or the other way around. You can have SQL Server on Azure and the application running on Oracle Cloud Infrastructure. And this is very easy to configure.  
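For readers who want to see roughly what that service-key exchange looks like in code, here is a hypothetical sketch using the OCI Python SDK. It is not from the episode: the OCIDs, shape name, and the pairing key retrieved from the Google Cloud side are all placeholders, and the exact provider parameters for your tenancy should be confirmed against the FastConnect documentation.

```python
# Hypothetical sketch: provisioning a FastConnect virtual circuit that pairs
# with a partner interconnect (e.g., Google Cloud Interconnect) using a
# service key retrieved from the partner side. All OCIDs are placeholders.
import oci

config = oci.config.from_file()  # reads ~/.oci/config
network = oci.core.VirtualNetworkClient(config)

details = oci.core.models.CreateVirtualCircuitDetails(
    compartment_id="ocid1.compartment.oc1..example",        # placeholder OCID
    display_name="oci-to-google-interconnect",
    type="PRIVATE",                                          # private peering to a DRG
    bandwidth_shape_name="1 Gbps",
    gateway_id="ocid1.drg.oc1..example",                     # your dynamic routing gateway
    provider_service_id="ocid1.providerservice.oc1..example",
    # The pairing key generated on the Google Cloud side of the interconnect:
    provider_service_key_name="google-generated-service-key",
)

vc = network.create_virtual_circuit(details).data
print(vc.id, vc.lifecycle_state)
```

In practice you would poll the virtual circuit until its lifecycle state becomes active, matching the console flow described above (for example, pairing US East Ashburn with us-east4).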
10:34 Lois: It really sounds like Oracle is at the forefront of the multicloud revolution. Thanks so much, Samvit, for shedding light on this exciting topic.  Samvit: It was my pleasure.  Nikita: That's a wrap for today. To learn more about what we discussed, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course. In our next episode, we’ll take a close look at Oracle Interconnect for Azure. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 11:05 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
    --------  
    11:34
  • Oracle Fusion Cloud Applications Foundations Training & Certifications
    In this special episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham dive into Oracle Fusion Cloud Applications and the new courses and certifications on offer. They are joined by Oracle Fusion Apps experts Patrick McBride and Bill Lawson who introduce the concept of Oracle Modern Best Practice (OMBP), explaining how it helps organizations maximize results by mapping Fusion Application features to daily business processes. They also discuss how the new courses educate learners on OMBP and its role in improving Fusion Cloud Apps implementations.   OMBP: https://www.oracle.com/applications/modern-best-practice/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.  Lois: Hi everyone! For the last two months, we’ve been focusing on all things MySQL. But today, we wanted to share some really exciting news about new courses and certifications on Oracle Fusion Cloud Applications that feature Oracle Modern Best Practice, or OMBP, and Oracle Cloud Success Navigator.  00:57 Nikita: And to tell us more about this, we have two very special guests joining us today. Patrick McBride is a Senior Director from the Fusion Application Development organization. He leads the Oracle Modern Best Practice Program office for Oracle. And Bill Lawson is a Senior Director for Cloud Applications Product Management here at Oracle University. We’ll first ask Patrick about Oracle Modern Best Practice and then move on to Bill for details about the new training and certification we’re offering. Patrick, Bill, thanks for being here today.   Patrick: Hey, Niki and Lois, thanks for the invitation. Happy to be here.  Bill: Hi Niki, Lois. 01:32 Lois: Patrick, let’s start with some basic information about what OMBP are. Can you tell us a little about why they were created? Patrick: Sure, love to. So, modern best practices are more than just a business process. They’re really about translating features and technology into actionable capabilities in our product. So, we've created these by curating industry leading best practices we've collected from our customers over the years. And ensure that the most modern technologies that we've built into the Fusion Application stack are represented inside of those business processes. Our goal is really to help you as customers improve your business operations by easily finding and applying those technologies to what you do every day.  02:18 Nikita: So, by understanding these modern best practice and the technology that enables it, you’re really unlocking the full potential of Fusion Apps.  Patrick: Absolutely. So, the goal is that modern best practice make it really easy for customers, implementers, partners, to see the opportunity and take action. 02:38 Lois: That’s great. 
OK, so, let’s talk about implementations, Patrick. How does Oracle Modern Best Practice support customers throughout the lifecycle of an Oracle Fusion Cloud implementation? Patrick: What we found during many implementers’ journeys of taking our solution and trying to apply it with customers is that customers come in with a long list of capabilities that they're asking us to replicate: what they've always done in the past. And what modern best practice is trying to do is help customers to reimagine the art of the possible…what's possible with Fusion by taking advantage of innovative features like AI, like IoT, like, you know, all of the other solutions that we built in to help you automate your processes to help you get the most out of the solution using the latest and greatest technology. So, if you're an implementer, there's a number of ways a modern best practice can help during an implementation. First is that reimagine exercise where you can help the customer see what's possible. And how we can do it in a better way. I think more importantly though, as you go through your implementation, many customers aren't able to get everything done by the time they have to go live. They have a list of things they’ve deferred and modern best practice really establishes itself as a road map for success, so you can go back to it at completion and see what's left as an opportunity to take advantage of, and you can use it to track the continuous innovation that Oracle delivers with every release and see what's changed with that business process and how you can get the most out of it. 04:08 Nikita: Thanks, Patrick. That’s a great primer on OMBP that I’m sure everyone will find very helpful. Patrick: Thanks, Niki. We want our customers to understand the value of modern best practices so they can really maximize their investment in Oracle technology today and in the future as we continue to innovate. 04:24 Lois: Right. And the way we’re doing that is through new training and certifications that are closely aligned with OMBP. Bill, what can you tell us about this? Bill: Yes, sure. So, the new Oracle Fusion Applications Foundations training program is designed to help partners and customers understand Oracle Modern Best Practice and how they improve the entire implementation journey with Fusion Cloud Applications. As a learner, you will understand how to adhere to these practices and how they promise a greater level of success and customer satisfaction. So, whether you’re designing, or implementing, or going live, you’ll be able to get it right on day one. So, like Patrick was saying, these OMBPs are reimagined, industry-standard business processes built into Fusion Applications. So, you'll also discover how technologies like AI, Mobile, and Analytics help you automate tasks and make smarter decisions. You’ll see how data flows between processes and get tips for successful go-lives. So, the training we’re offering includes product demonstrations, key metrics, and design considerations to give you a solid understanding of modern best practice. It also introduces you to Oracle Cloud Success Navigator and how it can be leveraged and relied upon as a trusted source to guide you through every step of your cloud journey, so from planning, designing, and implementation, to user acceptance testing and post-go-live innovations with each quarterly new release of Fusion Applications and those new features. And then, the training also prepares you for Oracle Cloud Applications Foundations certifications.
05:55 Nikita: Which applications does the training focus on, Bill? Bill: Sure, so the training focuses on four key pillars of Fusion Apps and the OMBP associated with them. For Human Capital Management, we cover Human Resources and Talent Management. For Enterprise Resource Planning, it’s all about Financials, Project Management, and Risk Management. In Supply Chain Management, you’ll look at Supply Chain, Manufacturing, Inventory, Procurement, and more. And for Customer Experience, we’ll focus on Marketing, Sales, and Service. 06:24 Lois: That’s great, Bill. Now, who is the training and certification for? Bill: That’s a great question. So, it’s really for anyone who wants to get the most out of Oracle Fusion Cloud Applications. It doesn’t matter if you’re an experienced professional or someone new to Fusion Apps, this is a great place to start. It’s even recommended for professionals with experience in implementing other applications, like on-premise products. The goal is to give you a solid foundation in Oracle Modern Best Practice and show you how to use them to improve your implementation approach. We want to make it easy for anyone, whether you’re an implementer, a global process owner, or an IT team employee, to identify every way Fusion Applications can improve your organization. So, if you’re new to Fusion Apps, you’ll get a comprehensive overview of Oracle Fusion Applications and how to use OMBP to improve business operations. If you're already certified in Oracle Cloud Applications and have years of experience, you'll still benefit from learning how OMBP fits into your work. If you’re an experienced Fusion consultant who is new to Oracle Modern Best Practice processes, this is a good place to begin and learn how to apply them and the latest technology enablers during implementations. And, lastly, if you’re an on-premise consultant, or you have non-Fusion consulting skills and are looking to upskill to Fusion, this is a great way to begin acquiring the knowledge and skills needed to transition to Fusion and migrate your existing expertise. 07:53 Raise your game with the Oracle Cloud Applications skills challenge. Get free training on Oracle Fusion Cloud Applications, Oracle Modern Best Practice, and Oracle Cloud Success Navigator. Pass the free Oracle Fusion Cloud Foundations Associate exam to earn a Foundations Associate certification. Plus, there’s a chance to win awards and prizes throughout the challenge! What are you waiting for? Join the challenge today by visiting oracle.com/education. 08:27 Nikita: Welcome back! Bill, how long is it going to take me to complete this training program? Bill: So, we wanted to make this program detailed enough so our learners find it valuable, obviously. But at the same time, we didn’t want to make it too long. So, each course is approximately 5 hours or more, and provides folks with all the requisite knowledge they need to get started with Oracle Modern Best Practice and Fusion Applications. 08:51 Lois: Bill, is there anything that I need to know before I take this course? Are there any prerequisites? Bill: No, Lois, there are no prerequisites. Like I was saying, whether you’re fresh out of college or a seasoned professional, this is a great place to start your journey into Fusion Apps and Oracle Modern Best Practice. 09:06 Nikita: That’s great, you know, that there are no barriers to starting. Now, Bill, what can you tell us about the certification that goes along with this new program? Bill: The best part, Niki, is that it’s free.
In fact, the training is also free. We have four courses and corresponding Foundation Associate–level certifications for Human Capital Management, Enterprise Resource Planning, Supply Chain Management, and Customer Experience. So, completing the training prepares you for an hour-long exam with 25 questions. It’s a pretty straightforward way to validate your expertise in Oracle Modern Best Practice and Fusion Apps implementation considerations. 09:40 Nikita: Ok. Say I take this course and certification. What can I do next? Where should my learning journey take me? Bill: So, you’re building knowledge and expertise with Fusion Applications, correct? So, once you take this training and certification, I recommend that you identify a product area you want to specialize in. So, if you take the Foundations training for HCM, you can dive deeper into specialized paths focused on implementing Human Resources, Workforce Management, Talent Management, or Payroll applications, for example.  The same goes for other product areas. If you finish the certification for Foundations in ERP, you may choose to specialize in Finance or Project Management and get your professional certifications there as your next step. So, once you have this foundational knowledge, moving on to advanced learning in these areas becomes much easier. We offer various learning paths with associated professional-level certifications to deepen your knowledge and expertise in Oracle Fusion Cloud Applications. So, you can learn more about these courses by visiting oracle.com/education/training/ to find out more of what Oracle University has to offer. 10:43 Lois: Right. I love that we have a clear path from foundational-level training to more advanced levels. So, as your skills grow, we’ve got the resources to help you move forward.  Nikita: That’s right, Lois. Thanks for walking us through all this, Patrick and Bill. We really appreciate you taking the time to join us on the podcast. Bill: Yeah, it’s always a pleasure to join you on the podcast. Thank you very much. Patrick: Oh, thanks for having me, Lois. Happy to be here. Lois: Well, that’s all the time we have for today. If you have questions or suggestions about anything we discussed today, you can write to us at [email protected]. That’s [email protected]. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 11:29 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
    --------  
    11:58
  • Monitoring MySQL and HeatWave
In this episode, Lois Houston and Nikita Abraham chat with MySQL expert Perside Foster on the importance of keeping MySQL performing at its best. They discuss the essential tools for monitoring MySQL, tackling slow queries, and boosting overall performance.   They also explore HeatWave, the powerful real-time analytics engine that brings machine learning and cross-cloud flexibility into MySQL.   MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   ----------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last two episodes, we spoke about MySQL backups, exploring their critical role in data recovery, error correction, data migration, and more. Lois: Today, we’re switching gears to talk about monitoring MySQL instances. We’ll also explore the features and benefits of HeatWave with Perside Foster, a MySQL Principal Solution Engineer at Oracle. 01:02 Nikita: Hi, Perside! We’re thrilled to have you here for one last time this season. So, let’s start by discussing the importance of monitoring systems in general, especially when it comes to MySQL. Perside: Database administrators face a lot of challenges, and these sometimes appear in the form of questions that a DBA must answer. One of the most basic questions is, why is the database slow? To address this, the next step is to determine which queries are taking the longest. Queries might take a long time because they are not correctly indexed. Then we get to some environmental queries or questions. How can we find out if our replicas are out of date, or if lag is too much of a problem? Can I restore my last backup? Is the database storage likely to fill up any time soon? Can and should we consider adding more servers and scaling out the system? And when it comes to users and making sure they're behaving correctly, has the database structure changed? And if so, who did it and what did they do? And more generally, what security issues have arisen? How can I see what has happened and how can I fix it? Performance is always at the top of the list of things a DBA worries about. The underlying hardware will always be a factor but is one of the things a DBA has the least flexibility with changing over the short term. The database structure, choice of data types and the overall size of retained data in the active data set can be a problem. 03:01 Nikita: What are some common performance issues that database administrators encounter? Perside: The sort of SQL queries that the application runs can be an issue. 90% of performance problems come from the SQL index and schema group.  03:18 Lois: Perside, can you give us a checklist of the things we should monitor? Perside: Make sure your system is working. Monitor performance continually.
Make sure replication is working. Check your backup. Keep an eye on disk space and how it grows over time. Check when long running queries block your application and identify those queries. Protect your database structure from unauthorized changes. Make sure the operating system itself is working fine and check that nothing unusual happened at that level. Keep aware of security vulnerabilities in your software and operating system and ensure that they are kept updated. Verify that your database memory usage is under control. 04:14 Lois: That’s a great list, Perside. Thanks for that. Now, what tools can we use to effectively monitor MySQL? Perside: The slow query log is a simple way to monitor long running queries. Two variables control which queries get logged. The first is long_query_time. If a query takes longer than this many seconds, it gets logged. And then there's min_examined_row_limit. If a query looks at more than this many rows, it gets logged. The slow query log doesn't ordinarily record administrative statements or queries that don't use indexes. Two variables control this, log_slow_admin_statements and log_queries_not_using_indexes. Once you have found a query that takes a long time to run, you can focus on optimizing the application, either by limiting this type of query or by optimizing it in some way. 05:23 Nikita: Perside, what tools can help us optimize slow queries and manage data more efficiently? Perside: To help you with processing the slow query log file, you can use the mysqldumpslow command to summarize slow queries. Another important monitoring feature of MySQL is the performance schema. It's a system database that provides statistics of how MySQL executes at a low level. Unlike user databases, performance schema does not persist data to disk. It uses its own storage engine that is flushed every time we start MySQL. And it has almost no interaction with the storage media, making it very fast. This performance information belongs only to the specific instance, so it's not replicated to other systems. Also, performance schema does not grow infinitely large. Instead, each row is recorded in a fixed-size ring buffer. This means that when it's full, it starts again at the beginning. The SYS schema is another system database that's strongly related to performance schema. 06:49 Nikita: And how can the SYS schema enhance our monitoring efforts in MySQL? Perside: It contains helper objects like views and stored procedures. They help simplify common monitoring tasks and can help monitor server health and diagnose performance issues. Some of the views provide insights into I/O hotspots, blocking and locking issues, statements that use a lot of resources, and various statistics on your busiest tables and indexes. 07:26 Lois: Ok… can you tell us about some of the features within the broader Oracle ecosystem that enhance our ability to monitor MySQL? Perside: As an Oracle customer, you also have access to Oracle Enterprise Manager. This tool supports a huge range of Oracle products. And for MySQL, it's used to monitor performance, system availability, your replication topology, InnoDB performance characteristics and locking, bad queries caught by the MySQL Enterprise Firewall, and events that are raised by the MySQL Enterprise Audit. 08:08 Nikita: What would you say are some of the standout features of Oracle Enterprise Manager? Perside: When you use MySQL in OCI, you have access to some really powerful features. HeatWave MySQL enables continuous monitoring of query statistics and performance.
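As an illustration of the variables Perside just walked through, here is a minimal Python sketch (assuming the mysql-connector-python driver, placeholder credentials, and an account with the privileges to set global variables) that enables the slow query log settings and then pulls the highest-latency statement digests from the Performance Schema:

```python
# A minimal sketch: enable the slow query log settings discussed above, then
# list the statement digests with the highest average latency. Connection
# details are placeholders; requires privileges to SET GLOBAL variables.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="admin", password="...")
cur = conn.cursor()

# Log any query slower than 2 seconds or examining more than 10,000 rows.
cur.execute("SET GLOBAL slow_query_log = 'ON'")
cur.execute("SET GLOBAL long_query_time = 2")
cur.execute("SET GLOBAL min_examined_row_limit = 10000")
# Optionally capture administrative and unindexed queries as well.
cur.execute("SET GLOBAL log_slow_admin_statements = 'ON'")
cur.execute("SET GLOBAL log_queries_not_using_indexes = 'ON'")

# Top five statement digests by average latency. The raw timer columns are
# in picoseconds, hence the division by 1e12 to get seconds.
cur.execute("""
    SELECT digest_text,
           count_star AS executions,
           ROUND(avg_timer_wait / 1e12, 3) AS avg_seconds
    FROM performance_schema.events_statements_summary_by_digest
    ORDER BY avg_timer_wait DESC
    LIMIT 5
""")
for digest_text, executions, avg_seconds in cur:
    print(f"{avg_seconds:>8}s  x{executions}: {digest_text[:80]}")
conn.close()
```

The same kind of check can be run through the sys schema helper views mentioned next, such as sys.statements_with_full_table_scans, which wrap queries like this one in a more readable form.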
The health monitor is part of the MySQL server and gathers raw data about the performance of queries. You can see summaries of this information in the Performance Hub in the OCI Console. For example, you can see average statement latency or top 100 statements executed. MySQL metrics lets you drill in with your own custom monitoring queries. This works well with existing OCI features that you might already know. The observability and management framework lets you filter by resource type and across several dimensions. And you can configure OCI alarms to be notified when some condition is reached. 09:20 Lois: Perside, could you tell us more about MySQL metrics? Perside: MySQL metrics uses the raw performance data gathered by the health monitor to measure the important characteristic of your servers. This includes CPU and storage usage and information relevant to your database connection and queries executed. With MySQL metrics, you can create your own custom monitoring queries that you can use to feed graphics. This gives you an up to the minute representation of all the performance characteristics that you're interested in. You can also create alarms that trigger on some performance condition. And you can be notified through the OCI alarms framework so that you can be aware instantly when you need to deal with some issue.  10:22 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 10:47 Nikita: Welcome back! Now, let’s dive into the key features of HeatWave, the cloud service that integrates with MySQL. Can you tell us what HeatWave is all about? Perside: HeatWave is the cloud service for MySQL. MySQL is the world's leading database for web applications. And with HeatWave, you can run your online transaction processing or OLTP apps in the cloud. This gives you all the benefits of cloud deployments while keeping your MySQL-based web application running just like they would on your own premises. As well as OLTP applications, you need to run reports with Business Intelligence and Analytics Dashboards or Online Analytical Processing, or OLAP reports. The HeatWave cluster provides accelerated analytics queries without requiring extraction or transformation to a separate reporting system. This is achieved with an in-memory analytics accelerator, which is part of the HeatWave service. In addition, HeatWave enables you to create Machine Learning models to embed artificial intelligence right there in the database. The ML accelerator performs classification, regression, time-series forecasting, anomaly detection, and other functions provided by the various models that you can embed in your architecture. HeatWave can also work directly with storage outside the database. With HeatWave Lakehouse, you can run queries directly on data stored in object storage in a variety of formats without needing to import that data into your MySQL database. 12:50 Lois: With all of these exciting features in HeatWave, Perside, what core MySQL benefits can users continue to enjoy? Perside: The reason why you chose MySQL in the first place, it's still a relational database and with full transactional support, low latency, and high throughput for your online transaction processing app. 
It has encryption, compression, and high availability clustering. It also has the same large database support, with up to 256 terabytes supported. It has advanced security features, including authentication, data masking, and database firewall. But because it's part of the cloud service, it comes with automated patching, upgrades, and backup. And it is fully supported by the MySQL team. 13:50 Nikita: Ok… let’s get back to what the HeatWave service entails. Perside: The HeatWave service is a fully managed MySQL. Through the web-based console, you can deploy your instances and manage backups, enable high availability, resize your instances, create read replicas, and perform many common administration tasks without writing a single line of SQL. It brings with it the power of OCI and MySQL Enterprise Edition. As a managed service, many routine DBA tasks are automated. This includes keeping the instances up to date with the latest version and patches. You can run analytics queries right there in the database without needing to extract and transform your databases, or load them in another dedicated analytics system. 14:52 Nikita: Can you share some common use cases for HeatWave? Perside: You have your typical OLTP workloads, just like you'd run on prem, but with the benefit of being managed in the cloud. Analytic queries are accelerated by HeatWave. So your reporting applications and dashboards are way faster. You can run both OLTP and analytics workloads from the same database, keeping your reports up to date without needing a separate reporting infrastructure. 15:25 Lois: I’ve heard a lot about HeatWave AutoML. Can you explain what that is? Perside: HeatWave AutoML enables in-database artificial intelligence and Machine Learning. Externally sourced data stores, such as sensor data exported to CSV, can be read directly from object store. And HeatWave generative AI enables chatbots and LLM content creation. 15:57 Lois: Perside, tell us about some of the key features and benefits of HeatWave. Perside: Autopilot is a suite of AI-powered tools to improve the performance and applicability of your HeatWave queries. Autopilot includes two features that help cut costs when you provision your service. There's auto provisioning and auto shape prediction. They analyze your existing use case and tell you exactly which shape you must provision for your nodes and how many nodes you need. Auto parallel loading is used when you import data into HeatWave. It splits the import automatically into an optimum number of parallel streams to speed up your import. And then there's auto data placement. It distributes your data across the HeatWave cluster nodes to improve your query retrieval performance. Auto encoding chooses the correct data storage type for your string data, cutting down storage and retrieval time. Auto error recovery automatically recovers a failed node and reloads data if that node becomes unresponsive. Auto scheduling prioritizes incoming queries intelligently. And auto change propagation brings data optimally from your DB system to the acceleration cluster. And then there's auto query time estimation and auto query plan improvement. They learn from your workload. They use those statistics to perform on-node adaptive optimization. This optimization allows each query portion to be executed on every local node based on that node's actual data distribution at runtime. Finally, there's auto thread pooling. It adjusts the enterprise thread pool configuration to maximize concurrent throughput.
It is workload-aware, and minimizes resource contention, which can be caused by too many waiting transactions. 18:24 Lois: How does HeatWave simplify analytics within MySQL and with external data sources? Perside: HeatWave in Oracle Cloud Infrastructure provides all the features you need for analytics, all in one system. Your classic OLTP applications run on the MySQL database that you know and love, provisioned in a DB system. Online analytical processing is done right there in the database without needing to extract and load it to another analytic system. With HeatWave Lakehouse, you can even run your analytics queries against external data stores without loading them to your DB system. And you can run your machine learning models and LLMs in the same HeatWave service using HeatWave AutoML and generative AI. HeatWave is not just available in Oracle Cloud Infrastructure. If you're tied to another cloud vendor, such as AWS or Azure, you can use HeatWave from your applications in those clouds too, and at a great price. 19:43 Nikita: That's awesome! Thank you, Perside, for joining us throughout this season on MySQL. These conversations have been so insightful. If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the MySQL 8.4: Essentials course.  Lois: This wraps up our season on the essentials of MySQL. But before we go, we just want to remind you to write to us if you have any feedback, questions, or ideas for future episodes. Drop us an email at [email protected]. That’s [email protected]. Nikita: Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 20:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
    --------  
    21:02
  • MySQL Backup - Part 2
    Lois Houston and Nikita Abraham continue their conversation with MySQL expert Perside Foster, with a closer look at MySQL Enterprise Backup. They cover essential features like incremental backups for quick recovery, encryption for data security, and monitoring with MySQL Enterprise Monitor—all to help you manage backups smoothly and securely.   MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services  with Oracle University, and with me is Lois Houston, Director of Innovation Programs.  Lois: Hi there! Last week was the first of a two-part episode covering the different types of backups and why they're important. Today, we’ll look at how we can use MySQL Enterprise Backup for efficient and consistent backups. 00:52 Nikita: And of course, we’ve got Perside Foster with us again to walk us through all the details. Perside, could you give us an overview of MySQL Enterprise Backup?  Perside: MySQL Enterprise Backup is a form of physical backup at its core, so it's much faster for large data sets than logical backups, such as the most commonly used MySQL Dump. Because it backs up the data files, it's non-locking and enables either complete system backup or partial backup, focusing only on specific databases. 01:29 Lois: And what are the benefits of using MySQL Enterprise Backup? Perside: You can back up to local storage or direct-to-common-cloud storage types. You can perform incremental backups, which can speed up your backup process greatly. Incremental backups enable point-in-time recovery. It's useful when you need to restore to a point in time before some application or human error occurred. Backups can be compressed to save archival storage requirements and encrypted for regulatory compliance and offline data security. 02:09 Nikita: So we know MySQL Enterprise Backup is an impressive tool, but could you talk more about some of the main features it supports for creating and managing backups? Specifically, which tools are integrated within MySQL Enterprise to support different backup scenarios? Perside: MySQL Enterprise Backup supports SBT, implemented by many common Tape storage systems. MySQL Enterprise Backup supports optimistic backup. This process deals with busy tables separately from the rest of the database. It can record changes that happen in the database during the backup for consistency. In a large data set, this can make a huge difference in performance. MySQL Enterprise Backup runs on all supported platforms. It's available when you have a MySQL Enterprise Edition license. And it comes with Enterprise Edition, but it also is available as a separate package. You can get the most recent version from eDelivery, where you can also get a trial version. If you need a previous release, you can get that from My Oracle Support. 
It's also available in all versions of MySQL, whether you run a Long-Term Support version or an Innovation release. For LTS releases, MySQL Enterprise Backup supports MySQL instances of the same LTS release. For Innovation releases, it supports the previous LTS release and any subsequent Innovation version within the same LTS family. 04:03 Nikita: How does MySQL Enterprise Monitor manage and track backup processes? Perside: MySQL Enterprise Monitor has a dashboard for monitoring MySQL Enterprise Backup. The dashboard monitors the health of the backup process and usage throughout the entire enterprise fleet, not just a single server. It supports drilling down into specific sub-operations within a backup job. You can see information about full backups, partial backups, and incremental backups. You can configure alerts that will notify you in the event of delays, failures, or backups that have not been performed in some configured time period. 04:53 Lois: Ok…let’s get into the mechanics. I understand that MySQL Enterprise Backup uses binary logs as part of its backup process. Can you explain how these logs fit into the bigger picture of maintaining database integrity? Perside: MySQL Enterprise Backup is a utility designed specifically for backing up MySQL systems in the most efficient and flexible way. At its simplest, it performs a physical backup of the data files, so it is fast. However, it also records the changes that were made during the time it took to do the backup. So, the result is that you get a consistent backup of the data at the time the backup completed. This backup is not tied to the host system and can be moved to other hosts. It can be used for archiving and is fully supported as part of the MySQL Enterprise Edition. It is, however, tied to the specific version of MySQL from which the backup was taken. So, you cannot use it for upgrades where the destination server is an upgrade from the source. For example, if you take a backup from MySQL 5.7, you can't directly restore it to MySQL 8.0. As a part of MySQL Enterprise Edition, it's not part of the freely available Community Edition. 06:29 Lois: Perside, how do MySQL's binary logs track changes over time? And why is this so critical for backups? Perside: The binary logs record changes to the database. These changes are recorded in a sequential set of files numbered incrementally. MySQL logs changes either in statement-based form, where each log entry records the statement that gives rise to the change, or in row-based form where the actual changed row data is recorded. If you select mixed format, then MySQL records statements for most operations and records rows for changes where the statement might result in a different row value each time it's run, for example, where there's a generated value like autoincrement. The current log file grows as changes are recorded. When it reaches its maximum configured size, that log file is closed, and the next sequential file is created for new logs. You can make this happen automatically by using the FLUSH BINARY LOGS command. This does not delete any existing log files. 07:59 Nikita: But what happens if you want to delete the log files? Perside: If you want to delete all log files, you can do so manually with the PURGE BINARY LOGS command, either specifying a file or a datetime.  08:14 Lois: When it comes to tracking transactions, MySQL provides a couple of methods, right? Can you explain the differences between Global Transaction Identifiers and the traditional log file sequence?
Perside: Log file positioning uses one of two formats, either legacy, where you specify transactions with a log file and a sequence number, or by using global transaction identifiers, or GTIDs, where each transaction is identified with a universally unique identifier or UUID. When you apply a transaction to the source server, that is when the GTID is attached to the transaction. This makes it particularly useful in replication topologies so that each transaction is uniquely identified by both its server ID and the transaction sequence number. When such a transaction is replicated to other hosts, the transaction retains its original GTID so that you can track when that transaction has propagated to the replicas and has been applied. The global transaction identifier is unique across the entire network. 09:49 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like LLMs, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more. 10:19 Nikita: Welcome back! Let’s move on to replication. How does MySQL’s legacy log format handle transactions, and what does that mean for replication timing across different servers? Perside: Legacy format binary logs are non-transactional. This means that a transaction made up of multiple modifications is logged as a sequence of changes. It's possible that different hosts in a replication network apply those changes at different times. Each server that uses legacy binary logging maintains the current applied log position as coordinates based on a combination of binary log file and the position within that log file. 11:11 Nikita: Troubleshooting with legacy logs can be quite complex, right? So, how does the lack of unique transaction IDs make it more difficult to address replication issues? Perside: Because each server has its own log with its own transactions, these modifications could have entirely different coordinates, making it challenging to find the specific modification point if you need to do any deep dive troubleshooting, for example, if one replica fell partway through applying a transaction and you need to partially roll it back manually. On the other hand, when you enable GTIDs, the transaction applied on the source host has that globally unique identifier attached to the whole transaction as a sequence of unique IDs. When the second or subsequent servers apply those transactions, they have exactly the same identifier, making it both transaction-safe for MySQL and also easier to troubleshoot if you need to. 12:26 Lois: How can you use binary logs to perform a point-in-time recovery in MySQL? Perside: First, you restore the last full backup. Once you've restarted the restored server, find the current log position of that backup, either its GTID or log sequence number. The SHOW BINARY LOG STATUS command shows this information. Then you can use the mysqlbinlog utility to replay events from the binary log files, specifying the start and stop position containing the range of log operations that you wish to apply. You can pipe the output of mysqlbinlog to the MySQL client if you want to execute the changes immediately, or you can redirect the output to a script file if you want to examine and perhaps edit the changes. 13:29 Nikita: And how do you save binary logs?
Perside: You can save binary logs to use in disaster recovery, for point-in-time restores, or for incremental backups. One way to do this is to flush the logs so that the log file closes and is ready for copying. And then copy it to a different server to protect against hardware media failures. You can also use the mysqlbinlog utility to create a copy of a set of binary log files in the same format, but to a different file or set of files. This can be useful if you want to run mysqlbinlog continuously, copying from the source server binary log to a new location, perhaps in network storage. If you do this, remember that mysqlbinlog does not run as a service or daemon, so you'll need to monitor it to make sure it's running continually.  14:39 Lois: Can you take us through how the MySQL Enterprise Backup process works? What does it do when performing a backup? Perside: First, it performs a physical file copy of necessary data and log files. This can be done while the server is fully operational, and it has minimal impact on performance. Once this initial copy is taken, it applies a low impact backup lock on the instance. If you have any tables that are not using InnoDB, the backup cannot guarantee transaction-safe consistency for those tables. It applies a read lock to those tables so that it can guarantee consistency. Then it briefly locks all logging activity to take a consistent view of the current coordinates of various logs. It releases the read lock on non-transactional tables. Using the log coordinates that were taken earlier in the process, it gathers all logs for transactions that have occurred since then. Bear in mind that the backup process takes place while the system is active. So, for a consistent backup, it must record not only the data files, but all changes that occurred during the backup. Then it releases the backup lock. The last piece of information recorded is any metadata for the backup itself, including its timing and contents in the final redo log. That completes the backup operation. 16:30 Nikita: And where are the files stored? Perside: The files contained in the backup are saved to the backup location, which can be on the local system or in network storage. The files contained in the backup location include files from the MySQL data directory. Some raw files include InnoDB tablespace, plus any InnoDB file-per-table tablespace files, and InnoDB log files. Other files might include data files belonging to other storage engines, perhaps MyISAM files. The various log files and instance configuration files are also retained. 17:20 Lois: What steps do you follow to restore a MySQL Enterprise Backup, and how do you guarantee consistency, especially when dealing with incremental backups? Perside: To restore from a backup using MySQL Enterprise Backup, you must first remove any previous files from the data directory. The restore process will fail if you attempt to restore over an existing system or backup. Then you restore the database with appropriate options. If you only restore a single backup, you can use copy-back-and-apply-log to ensure that the restored system is in a consistent state. If you performed a full backup and subsequent incremental backups, you might need to restore multiple times using copy-back, and then use copy-back-and-apply-log only for the final consistent restore operation. The restored server might be on the same host or might be a different host with different configuration.
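To make the point-in-time recovery steps above concrete, here is a minimal sketch of the binary log replay stage. It is an illustration rather than the episode's own procedure: it assumes the full backup has already been restored and the server restarted, that client credentials come from an option file such as ~/.my.cnf, and it uses placeholder paths, positions, and timestamps.

```python
# A sketch of replaying saved binary logs for point-in-time recovery.
# Assumes the restored server is running and credentials are supplied via
# an option file. All paths, positions, and timestamps are placeholders.
import subprocess

BINLOGS = ["/backups/binlog.000042", "/backups/binlog.000043"]

# Extract only the events between the position recorded at backup time and
# the moment just before the error occurred.
replay = subprocess.run(
    ["mysqlbinlog",
     "--start-position=154",                 # position recorded by the backup
     "--stop-datetime=2025-03-01 09:59:00",  # just before the bad transaction
     *BINLOGS],
    capture_output=True, text=True, check=True)

# Pipe the generated SQL into the server, or write replay.stdout to a file
# first if you want to examine and perhaps edit the changes.
subprocess.run(["mysql"], input=replay.stdout, text=True, check=True)
```

Redirecting the output to a file before applying it mirrors Perside's advice about reviewing the changes rather than executing them immediately.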
This means that you might have to change some configuration on the restored server, including the operating system ownership of the restored data directory and various MySQL configuration files. If you want to retain the MySQL configuration files from the source server to reproduce on a new server, you should copy those files separately. MySQL Enterprise Backup focuses on the data rather than the server configuration. It does, however, produce configuration files appropriate for the backup. These are similar to the MySQL configuration files, but only contain options relevant for the backup process itself. There are also variables that have been changed to non-default values, and all global variable values. These files must be renamed and possibly edited before they are suitable to become configuration files in the newly restored server. For example, the mysqld-auto.cnf file contains a JSON-formatted set of persisted variables. The backup process stores this under the new name backup-mysqld-auto.cnf. If you want to use it in the restored server, you must rename it and place it in the appropriate location so that the restored server can read it. This also applies in part to the auto.cnf file, which contains identifying information for the server. If you are replacing the original server or restoring on the same host, then you can keep the original values. However, this information must be unique within a network. So, if you are restoring this backup to create a replica in a replication topology, you must not include that file and instead start MySQL without it so that it creates its own unique identifying information. 21:14 Nikita: Let’s discuss securing and optimizing backups. How does MySQL Enterprise Backup handle encryption and compression, and what are the critical considerations for each? Perside: You can encrypt backups so that they are secure while moving them around or archiving them. The encrypt option performs the encryption. And you can specify the encryption key either on the command line as a string or a key file that has been generated with some cryptographic algorithm. Encryption only applies to image files, not to backup directories. You can also compress backups with different levels of compression, with higher levels requiring more CPU, but resulting in greater savings in storage. Compression only works with InnoDB data files. If your organization has media management software for performing backups, perhaps to a tape array, then you can use the SBT interface supported in MySQL Enterprise Backup. 22:34 Lois: Before we wrap up, could you share how MySQL Enterprise Backup facilitates the management of backups across a multi-server environment? Perside: As an enterprise solution, it's easy to run MySQL Enterprise Backup in a multi-server environment. We've already mentioned backing up to cloud storage, but you can, of course, back up to a directory or image on network storage that can be mounted locally, perhaps with NFS or some other file system. The "with-timestamp" option enables multiple backups within the same backup directory, where each is in its own subdirectory named with the timestamp. This is especially useful when you want to run the same backup script repeatedly.  23:32 Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs.
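Pulling together the options mentioned in this episode, here is a hypothetical wrapper that invokes the mysqlbackup utility with compression, an encrypted single-file image, and timestamped backup subdirectories. The user name, paths, and key handling are placeholders, not a recommended production setup; the password is assumed to come from an option file rather than the command line.

```python
# A hypothetical nightly-backup wrapper around mysqlbackup, illustrating the
# options discussed above. Paths and credentials are placeholders; the
# password is expected to come from an option file, not this script.
import subprocess

subprocess.run(
    ["mysqlbackup",
     "--user=backup_admin",
     "--backup-dir=/backups/nightly",
     "--with-timestamp",           # each run gets its own timestamped subdirectory
     "--compress",                 # compress InnoDB data files in the backup
     "--backup-image=/backups/nightly.mbi",
     "--encrypt-password=...",     # placeholder key for encrypting the image file
     "backup-to-image"],           # produce a single-file backup image
    check=True)
```

Because encryption applies to image files rather than backup directories, the sketch uses backup-to-image; running this same script repeatedly is exactly the case the with-timestamp option is designed for.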
23:32 Lois: Thank you for that detailed overview, Perside. This wraps up our discussion of the various backup types, their pros and cons, and how to select the right option for your needs. In our next session, we'll explore the different MySQL monitoring strategies and look at the features as well as benefits of HeatWave. Nikita: And if you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and take a look at the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 24:06 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
    --------  
  • MySQL Backup - Part 1
Join Lois Houston and Nikita Abraham as they kick off a two-part episode on MySQL backups with MySQL expert Perside Foster. In this conversation, they explore the critical role of backups in data recovery, error correction, data migration, and more.   Perside breaks down the differences between logical and physical backups, discussing their pros and cons, and shares valuable insights on how to create a reliable backup strategy to safeguard your data.   MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services.  Nikita: Hi everyone! This is Episode 6 in our series on MySQL, and today we’re focusing on how to back up our MySQL instances. This is another two-parter and we’ve got Perside Foster back with us.    00:49 Lois: Perside is a MySQL Principal Solution Engineer at Oracle and she’s here to share her insights on backup strategies and tools. In this episode, we’ll be unpacking the types of backups available and discussing their pros and cons. Nikita: But first let’s start right at the beginning. Perside, why is it essential for us to back up our databases? 01:10 Perside: The whole point of a database is to store and retrieve your business data, your intellectual property. When you back up your data, you are able to do disaster recovery so that your business can continue after some catastrophic event. You can recover from error and revert to a previous known good version of the data. You can migrate effectively from one system to another, or you can create replicas for load balancing or parallel systems. You can retain data for archival purposes. Also, you can move large chunks of data to other systems, for example, to create a historical reporting application. And then you can create test environments for applications that are in development and that need real-world test data. 02:10 Lois: Yes, and creating a robust backup strategy takes planning, doesn’t it? Perside: As with any complex, business-critical process, there are challenges in coming up with a backup strategy that you can trust. This requires some careful planning. Any backup process needs to read the data, and in a production system, this will involve adding input/output operations to what might be an already busy system. The resources required might include memory or disk I/O operations, and of course, you'll want to avoid downtime, so you might need to schedule the backup for a time when the system is not at peak usage. You'll also need to consider whether the backup is on network storage or some local storage so that you don't exceed limitations for those resources. It isn't enough just to schedule the backup.
You'll also need to ensure that backups succeed, which you can do with monitoring and consistency checks. No backup is effective unless you can use it to restore your data, so you should also test your restore process regularly. If you have business requirements or regulatory commitments that control your data storage policies, you need to ensure your backups also align with those policies. Remember, every backup is a copy of your data at that moment in time, so it is subject to all of your data retention policies, just like your active data. 04:02 Nikita: Let’s talk backup types. Perside, can you break them down for us? Perside: The first category is logical backup. A logical backup creates a script of SQL statements that will re-create the data structures and rows of the live database. The script can be moved to another server as required. And because it's a script, it needs to be created by and executed on a running server. Because of this, the backup process takes up resources from the source server and is usually slower than a physical media backup.  04:45 Nikita: Ok… what’s the next type? Perside: The next category is physical backup. This is a backup of the actual data files in the server. Bear in mind that the file copy process takes time, and if the database server is active during that time, then the later parts of the copied data will be inconsistent with the parts copied earlier. Ideally, the files must be stable during the backup so that the database state at the start of the copy process is consistent with the state at the end. If there is inconsistency in the data files, then MySQL detects that when the server starts up and it performs a crash recovery. From MySQL’s perspective, there is no difference between a database backup copied from a running server and restarting a server after a crash. In each case, the data files were not saved in a consistent state, and crash recovery can take a lot of time on large databases. 06:02 Lois: I see… how can MySQL Enterprise Backup help with this? Perside: MySQL Enterprise Backup has features that enable a consistent backup from a running server. If you create file system copies instead, either by copying the data files or by performing a file system snapshot, then you must either shut the server down before the copy or undergo crash recovery on the server that starts with those copied files. 06:35 Lois: And aside from logical and physical backups, are there other techniques to back up data? Perside: The binary log enables point-in-time recovery. You can also use replication in a couple of ways. If you start replication and then stop it at a particular time, the replica effectively contains a live backup of the data at the time that you stopped replication. You can also enable a defined replication lag so that the replica is always a known period of time behind the production database. And you can use transportable tablespaces, which are tables or sets of tables in specific files that you can copy to another server.
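For the binary log approach mentioned above, here is a minimal sketch of continuously copying a source server's binary logs to network storage with mysqlbinlog, driven from Python. The mysqlbinlog options are real; the host, user, starting log file, and destination are placeholders, and because mysqlbinlog is not a daemon, a wrapper like this still needs external monitoring and restarting:

    import subprocess

    # Stream binary logs from a running source server to local or network
    # storage in their original binary format, for point-in-time recovery.
    subprocess.run(
        [
            "mysqlbinlog",
            "--read-from-remote-server",         # connect to the server like a replica
            "--host=prod-db.example.com",        # placeholder source host
            "--user=binlog_reader",              # placeholder user; --password
            "--password",                        # prompts interactively
            "--raw",                             # copy events verbatim, not as SQL text
            "--stop-never",                      # keep reading as new events arrive
            "--result-file=/netstore/binlogs/",  # prefix for the copied files
            "binlog.000001",                     # first binary log file to copy
        ],
        check=True,
    )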
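And as a sketch of the defined replication lag just described, the following hypothetical snippet, run on the replica via the mysql client, keeps it a fixed hour behind its source. SOURCE_DELAY is the current option name in MySQL 8.x replication; the connection details and the assumption that replication is already configured are placeholders:

    import subprocess

    # Keep the replica 3600 seconds behind the source, so it always holds
    # a known-age, queryable copy of the data. The SQL thread must be
    # stopped while the delay is changed.
    delay_sql = (
        "STOP REPLICA SQL_THREAD; "
        "CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600; "
        "START REPLICA SQL_THREAD;"
    )
    subprocess.run(["mysql", "--execute", delay_sql], check=True)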
07:34 AI is being used in nearly every industry…healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And it’s only going to get more prevalent and transformational in the future. It’s no wonder that AI skills are the most sought-after by employers. If you’re ready to dive in to AI, check out the OCI AI Foundations training and certification that’s available for free! It’s the perfect starting point to build your AI knowledge. So, get going! Head over to mylearn.oracle.com to find out more. 08:14 Nikita: Welcome back! I want to return to the topic of crafting an effective backup strategy. Perside, any advice here? Perside: We can use the different backup types to come up with an effective backup strategy based on how we intend to restore the data. A full backup is a complete copy of the database at some point in time. This can take a lot of time to complete and to restore. An incremental backup contains only the changes since the last backup, as recorded in the binary log files. To restore an incremental backup, you must have restored the previous full backup and any incremental backups taken since then. For example, you might have four incremental backups taken after the last full backup. Each incremental backup contains only the changes since the previous backup. If you want to restore to the point at which you took the fourth incremental backup, then you must restore the full backup and each incremental backup in turn. A differential backup contains all changes since the last full backup. It contains only those portions of the database that are different from the full backup. Over time, each differential backup takes longer because it contains more changes. However, it is easier to restore: if you want to restore to the point at which you took a particular differential backup, you must restore the last full backup and only the differential backup that you require. You can ignore the intermediate differential backups. 10:13 Lois: Can you drill into the different types of backups and explain how each technique is used in various situations? Perside: One of the physical backup techniques is taking a snapshot of the storage medium. The main advantage of a snapshot is speed. A snapshot is quick to create and restore, which makes it well suited to situations where you need to quickly revert to a previous version of the database, for example, in a development environment. A storage snapshot is often a feature of the underlying file system. Linux supports the Logical Volume Manager, or LVM, and many storage area network or network-attached storage platforms have native snapshot features. You can also use a storage snapshot to supplement a more scheduled logical backup structure. This way, the snapshot enables quick reversion to a previous state, and the logical backup can be used for other purposes, such as archiving or disaster recovery. 11:28 Nikita: Are there any downsides to using snapshots? Perside: The first is consistency. Because taking a snapshot is quick and does not cause a database performance hit, you might take the snapshot while the system is running. When you restore such a snapshot, MySQL must perform a crash recovery. If you want a consistent snapshot, you must shut down MySQL in advance. Another problem is that the snapshot is a copy of the file system and not of the database. So if you want to transfer it to another system, you must create a database backup from the storage. This adds steps and time. A snapshot records the state of the disk at a specific point in time. Initially, the snapshot is practically empty. When a data page changes, the original version of that page is written to the snapshot. Over time, the snapshot storage grows as more data pages are modified. So multiple snapshots result in multiple writes whenever a snapshotted data page is changed. To avoid performance deterioration, you should remove or release snapshots when they are no longer in use.
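To make the snapshot workflow concrete, here is a hedged sketch using LVM from Python. The lvcreate and lvremove commands are real LVM tools, but the volume group, volume names, and snapshot size are placeholder assumptions, and, per the caveats above, a snapshot of a running server will need crash recovery when restored unless MySQL is shut down first:

    import subprocess

    def run(args):
        # Run a command and raise if it fails.
        subprocess.run(args, check=True)

    # Create a copy-on-write snapshot of the volume holding the MySQL data
    # directory (placeholder names). Shut MySQL down first if you need a
    # consistent snapshot rather than a crash-recoverable one.
    run(["lvcreate", "--snapshot", "--size", "10G",
         "--name", "mysql_snap", "/dev/vg0/mysql_data"])

    # ... mount the snapshot and copy its contents to backup storage here ...

    # Release the snapshot afterwards to avoid ongoing copy-on-write
    # overhead on the origin volume.
    run(["lvremove", "--yes", "/dev/vg0/mysql_snap"])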
Also, snapshots are tied to the storage medium, so they're not suited to moving backups between systems. 13:03 Lois: How about logical backups? How do we create those? Perside: The mysqldump utility has long been a standard way to create logical backups. It creates a script made up of the SQL statements that re-create the data and structure in a database or server.  13:21 Nikita: Perside, what are the advantages and disadvantages of mysqldump? Perside: It is an excellent solution for preserving the database structure or for backing up small databases. Logical backups naturally require that the server is running, and they use system resources to produce the SQL statements, so they are less suitable for very large databases. The output is a human-readable text file with SQL statements that you can edit as a text file, and it can be managed by a source code management system. This allows you to maintain a known good version of the database structure, one that matches your application source code version, which can also include sample data. The mysqldump disadvantages are that it needs to run against an active server. So if your production server is busy, you must take action to ensure a consistent backup. This requires locking tables or using the single-transaction option, which can result in application delays as the backup completes in a consistent way. Mysqldump does not track changes since the last backup, so it has no way of recording only those rows that have changed. This means it's not suited to performing differential or incremental backups. The scripts must be executed against a running server, so it is slower to restore than using a data dump or physical backup. Additionally, if the database structure has indexes or foreign keys, these conditions must be checked and updated as the data is imported. You can disable these checks during the import but must handle any risks that come from doing so. Because the backup is nothing more than an SQL script, it is easy to restore. You can simply use the MySQL client or any other client tool that can process scripts.  15:46 Nikita: Is there an alternative tool for logical backups? Perside: MySQL Shell is another utility that supports logical backup and restore. Unlike mysqldump, it dumps data in a form that can be processed in parallel, which makes it much faster to use for larger data sets. This enables it to export to or import from remote storage, where it can stream data without requiring the whole file before starting the import. It can process multiple chunks of imported data in parallel, and you can monitor progress as it completes. You can also pause an import and resume it later, for example, in the event of a network outage. You can dump and restore table structures, including indexes and primary keys. The utilities in MySQL Shell are exposed through functions. The dumpInstance and dumpSchema utilities back up the whole server or specified schemas, respectively, and loadDump is how you restore from such a dump.
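To contrast the two logical-backup tools, here is a minimal sketch of each, driven from Python. The mysqldump and mysqlsh options and the util.dump_instance()/util.load_dump() utilities are real; the connection details, paths, and thread count are placeholder assumptions, and credentials are assumed to come from the default login configuration:

    import subprocess

    # mysqldump: a single consistent SQL script. --single-transaction gives
    # a consistent read of InnoDB tables without locking them for the dump.
    subprocess.run(
        ["mysqldump", "--single-transaction", "--all-databases",
         "--result-file=/backups/all-databases.sql"],
        check=True,
    )

    # MySQL Shell: parallel, chunked dump via the util functions in Python
    # mode (the same utilities are dumpInstance/loadDump in JavaScript mode).
    subprocess.run(
        ["mysqlsh", "root@localhost", "--py", "--execute",
         "util.dump_instance('/backups/dump', {'threads': 4})"],
        check=True,
    )
    # Restoring later would use: util.load_dump('/backups/dump', {'threads': 4})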
17:07 Lois: Thanks for that rundown, Perside! This concludes our first part on MySQL backups. Next week, we’ll take a look at advanced backup methods and the unique features of MySQL Enterprise Backup. Nikita: And if you want to learn more about everything we discussed today, head over to mylearn.oracle.com and explore the MySQL 8.4 Essentials course. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 17:37 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.