Channel: Symantec Connect - Backup and Recovery

Relabeling default media id A0000 to new barcode label on tape

I need a solution

Hi,

I have a question about barcode labels. I need to relabel our expired media/tapes (LTO5) in our tape library with new barcode labels, e.g. T15XXX. The default media ID assigned by the system is, for example, A00001. Do I need to set up a barcode rule so that every time a new tape with the new barcode label is inserted into our library it will automatically be read/scanned? Or can I just run vmchange -barcode <barcode> -m <media_id>,
e.g. vmchange -barcode T15XXX -m A00001

If I do that, do I first have to eject the tapes I need to relabel, apply the new barcodes, put them back in the library, and then run a robot inventory / update volume configuration? That way it would read/scan the new barcode label and no longer see the old default A00001 media ID?
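
For reference, this is roughly the sequence I had in mind (just a sketch; the new barcode T15001, the robot type/number tld/0, and robot_host are placeholders for our setup, and I am assuming the commands are run on the master server):

    # check how the media is currently recorded under the default media ID
    vmquery -m A00001

    # after physically applying the new barcode label, record the new barcode
    vmchange -barcode T15001 -m A00001

    # re-inventory the robot so NetBackup re-reads the barcodes
    vmupdate -rt tld -rn 0 -rh robot_host -use_barcode_rules

As far as I understand, a barcode rule would only come into play when brand-new media is added during an inventory update, but please correct me if that is wrong.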

I have several tapes in our library that have expired, and I will use them for a monthly tape backup. All of them have the default media ID labels (A0000x). As soon as I have used them all, new tapes will be purchased with new barcode labels. Our tape library has a barcode reader/scanner.

Thanks and Regards


Backup Exec

I need a solution

I have a Windows 2008 Standard server attached to an ML6000 tape library. I had an issue where I had to reinstall Backup Exec, and now I can't back up my data. I'm getting an error stating that a dependency service or group failed to start for the Backup Exec Job Engine and Backup Exec Agent Browser services. I am currently running BE 2010 R3. I was recently forced to load some updates and can't determine the root cause. Any suggestions?

Media Ownership

I need a solution

Hi.  I have an old HP-UX server with an Oracle DB whose version (6.5) is no longer supported by NetBackup 7.x.  I was able to back up the server and the Oracle database.  I now want to restore those backups to a newer server that is supported by NetBackup.  I am unable to do the restore because it would make use of the original server and the data would pass over the LAN.  I tried using FORCE_RESTORE on the new server, but somehow it doesn't work and results in an error.  It is now being investigated by Symantec Support.

However, I can't wait for Symantec Support's answer, as it is taking them too long to investigate and I have a migration timetable to follow.  I thought of an alternative solution, which is to change the ownership of the media from the old server to my new server.  My question is: will this work?  Do I need to change the ownership of the backup media and/or the backup images?  What is the fastest way to do it?

If I change the owner of the media, will it automatically change the owner of the images?  Btw, I'm using a VTL.
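
For context, these are the approaches I was thinking of trying (a rough sketch only; old_server and new_server are placeholders, and I have not confirmed that either is appropriate for a VTL):

    # Option A: force restores of the old server's images to go through the
    # new server (bp.conf entry on the master - the FORCE_RESTORE route I tried):
    FORCE_RESTORE_MEDIA_SERVER = old_server new_server

    # Option B: change the media ownership recorded for a piece of media:
    bpmedia -movedb -m MEDIA_ID -newserver new_server -oldserver old_server

    # Option C (if the image records themselves need the server name changed;
    # I am not sure this is required in addition to Option B):
    bpimage -newserver new_server -oldserver old_server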

I'm also open to other suggestions as to how to resolve this.  Thanks in advance!

NDMP restoration stalls

I need a solution

Hi experts,

I am running an NDMP restore job. I started restoring the data manually three days ago, and today I find that only a small portion of the data has been restored.

When I checked the activity monitor it shows "begin read" and the process stalled.

I checked the bpbrm and bptm logs but could not find any clues.

BTW, I have a NetBackup 6.0 environment with a StorageTek NAS box.

I would be grateful for any workaround for this issue. You can see the status of the restore in the attached screenshot.

ndmp-stalled.png
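
For anyone looking at this, a couple of checks that can be run from the media server driving the NDMP host (a sketch only; the filer name is a placeholder and paths assume a default UNIX install):

    # verify the NDMP host credentials/connectivity from the media server
    tpautoconf -verify ndmp_filer_name

    # confirm whether the restore job is genuinely stalled or slowly progressing
    bpdbjobs -report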

Awaiting your valuable replies

Cheers

Albatross

BE2010 Dedup Repository - Migrate to BE2014

I need a solution

Hi all

I have a BE2010 B2D folder that was once in use on a Windows 2008 server and has since been archived off.

I now have a Windows 2012 server and will need to put BE2014 on it. Can I simply import the BE2010 B2D folder, or is it not that simple?

thanks

Bernard

NBU 7.6 for VMware and transport mode SAN

I need a solution

Hi

This is my environment:

Master/media server on the same host: a physical Windows 2012 R2 server with a dual-port FC HBA connected to two Brocade 6505 SAN fabrics.

vCenter 5.1 on a Windows 2012 R2 virtual machine.

Backing up virtual machines from 15 ESXi 5.1 hosts.

I want to use SAN transport mode. I find that the only method is to configure FC zoning so the NBU media server can access the VMFS-5 formatted LUNs used by the ESXi hosts. Then, from the Windows Disk Management console, I can see these LUNs; they appear as in the image attached to this post.

I'm afraid someone might mistakenly put the disk online and reformat it to NTFS.

Is there any way to prevent this problem?

Is my configuration method correct? I can now back up my virtual machines faster than with NBD transport mode, but I need to be able to trust these settings to avoid problems.
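
One thing I am considering to protect against this (please confirm if it is the right approach): telling Windows on the backup host not to automatically mount or bring online new SAN volumes, so the VMFS LUNs stay offline in Disk Management. A sketch, run from an elevated command prompt on the master/media server:

    diskpart
    DISKPART> automount disable
    DISKPART> automount scrub
    DISKPART> san policy=offlineshared
    DISKPART> exit

That would not stop an administrator from deliberately bringing a disk online, but it should prevent Windows from doing it on its own.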

Backup Exec reporting tool

I need a solution

Hi,

I am new to Backup Exec and am writing with a question for the first time.

We have more than 100 Backup Exec servers, most of them on the 2010 version, and they are being upgraded to 2014 now. Currently we use DPA to report success/failure on these servers, but BE 2014 is not supported in DPA, so as we upgrade we are looking for a solution for reporting backup successes and failures.

I can configure notifications, but if there is a Symantec or other vendor product for this, please let me know.

Thank you in advance

Regards

Sudarshan Kshirsagar

(90) media manager received no data for backup image

I need a solution

Backups failed with:

03/30/2015 12:40:28 - Info bpbrm (pid=25487) Starting delete snapshot processing
03/30/2015 12:40:29 - Error bpbrm (pid=25487) from client misout7lcsdir01-nebr.sfdc.sbc.com: ERR - Get bpfis state from msprd310-ebr.sfdc.sbc.com failed. status = 25
03/30/2015 12:40:29 - Info bpfis (pid=84440) Backup started
03/30/2015 12:40:29 - Critical bpbrm (pid=25487) from client misout7lcsdir01-nebr.sfdc.sbc.com: FTL - cannot open D:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.misout7lcsdir01-nebr.sfdc.sbc.com_1427737192.1.0
03/30/2015 12:40:29 - Info bpfis (pid=84440) done. status: 4207
03/30/2015 12:40:29 - Info bpfis (pid=84440) done. status: 4207: Could not fetch snapshot metadata or state files
media manager received no data for backup image  (90)
 

For the Windows client, the backup selection starts the following bat file, and the backup fails with "no data for backup image":

D:\Program Files\Veritas\NetBackup\bin\bpstart_notify.NBCMD_Run_Windows_D__Program_Files_Veritas_NetBackup_bin.LUPrep.bat

Please find attached the bptm logs from the master server and the bpbkar logs from the Windows client.

Please suggest a solution regarding the file selection and file list for the shadow copy, based on the bpbkar logs.
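
For what it's worth, the snapshot state on the client can also be checked with the standard Windows VSS commands (nothing NetBackup-specific; run on the Windows client that is failing):

    vssadmin list writers
    vssadmin list providers
    vssadmin list shadows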

Thanks.


How to 'Run catalog job on all media that are needed for the backup or restore job'

I need a solution

Hi Backup Exec forum,

I'm running BE 2010 R3. The media server is version 13.0 Rev 5204. I am running into this issue http://www.symantec.com/business/support/index?page=content&id=TECH150111 and I'd like to go ahead with solution 2.

However I am not sure what the last step means:

"Run catalog job on all media that are needed for the backup or restore job."

Should I go into 'Media / Online Media', select all media, right-click, and choose 'Catalog Media'?

Thanks for your help

Boost Hyper-V dedup job rates 50% or more by increasing agent read queue depth beyond 1


Brief Problem Description

  When backing up a Hyper-V VM, the true sequential read capacity of the source storage is not fully utilized, even though no other bottlenecks (CPU, network, target disk, etc.) exist in the backup system.  This reduces job rates far below what the physical hardware is capable of.  Other applications (for example, a simple file copy) achieve sequential read performance far greater than what Backup Exec reads from the source disk.

   The issue manifests as inexplicable reductions in job rate when any other IO is present on the storage array.

Problem Cause

  The Backup Exec remote agent only ever sends one outstanding read request to the source storage.  Keeping such a short queue depth effectively tells the source storage that it is keeping up with the workload required for the sequential read stream.  The adaptive read-ahead of the source storage never scales up the read-ahead size because of the absence of higher IO pressure from the BE remote agent's sequential read stream.  This leads to much lower sequential read throughput than the source array is capable of.

Workaround

  Nearly all arrays utilize adaptive read-ahead, which gives the best results in most situations, provided that applications requiring higher sequential read performance put enough IO pressure on the array to cause adaptive read-ahead to scale to match the desired workload.  However, when the source application does not apply appropriate IO pressure, a half-effective workaround is to manually specify a large read-ahead amount for the volume in question before backup, returning it to adaptive after the backup.

  In my own testing, I found this workaround provided a 50% increase in job rate for two concurrent backups versus the adaptive setting, which never scales read-ahead because it never sees IO pressure.  The difference is the elimination of periods where the source agent is needlessly waiting on IO from disk while CPU stays relatively low.  With the workaround, CPU usage of the single core used for the dedup thread stays consistently high, since it is no longer needlessly waiting on storage.

  However, this workaround has a couple of problems:

  • This is a “dumb” setting.  While it increases read-ahead for the desired backup sequential read stream, it also increases read-ahead for any other sequential read streams on the same volume. 

    • This reduces the amount of read-ahead time and buffer space that can possibly be afforded to the backup read stream, since it is shared with other sequential IO, even if very small.  If there is any other sequential IO on the volume, a manual read-ahead setting will never reach the performance of adaptive mechanism scaling read-ahead only for the backup read stream.

    • It also means that even small sequential non-backup reads (say, two successive 64K reads) will invoke a much longer read-ahead and cause more disks - probably all disks on the array - to seek in order to satisfy the read-ahead setting.  This amounts to more time spent seeking that could have been spent reading.

  • A large static read-ahead setting is in no way appropriate for normal production use, so it requires setting a script to run before and after backup to turn it on and off.
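
  As a concrete example, that before/after wrapper amounts to something like the following pair of scripts attached to the job as pre- and post-commands.  The array CLI shown ("arraycli") is a stand-in for whatever tool your storage vendor provides, and the volume name and read-ahead size are placeholders:

    rem pre-backup.cmd - force a large fixed read-ahead on the source volume
    arraycli set read-ahead --volume SourceVol01 --policy fixed --size 4096

    rem post-backup.cmd - return the source volume to adaptive read-ahead
    arraycli set read-ahead --volume SourceVol01 --policy adaptive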

True Solution

  A real solution to this product issue is to keep more outstanding IOs against the source storage.  In my specific situation, other applications that keep queue depth at 4 experience dramatically better sequential read performance.

  Probably most customers would desire a registry setting to control this. 

  Call the setting to be put on the source host remote agent “ReadQueueDepth” or “ReadPressure” or similar.  Say it could be set to 1, 2, 4, or 8.

  • 1 = Less aggressive, Lowest Backup Performance, less impact on other IO.  Other IO on volume or source array likely to reduce backup job rate.

  • 2 = Default (Default of 1 for a high-performance sequential read application is pretty silly)

  • 4= Aggressive, High Backup Performance

  • 8 = Very Aggressive, Highest Backup Performance.  This puts a high IO pressure on source storage in order to cause adaptive read-ahead algorithms to provide the best rate.  May have significant impact on other IO happening on source storage array.
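
  If such a setting existed, turning it up would be as simple as something like the following.  To be clear, this is purely illustrative - "ReadQueueDepth" is the setting being proposed here, not something that exists in the product today, and the key path is a placeholder:

    rem hypothetical - this registry value does not currently exist in Backup Exec
    reg add "HKLM\SOFTWARE\Symantec\Backup Exec For Windows\Backup Exec\Engine" /v ReadQueueDepth /t REG_DWORD /d 8 /f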

  In my case, I’d probably set it to the highest setting, which for my client-side dedup hyper-v backup would cause me to hit a single-cpu-core bottleneck at around 300 MB/sec.  Then, by running multiple jobs from multiple volumes I can use another core and get more aggregate throughput.  I don’t see any reason why I couldn’t hit 600 MB/sec nearly continuously with concurrency.  So, to put it simply, this one change would provide for 4 times the backup throughput on the same physical hardware vs the default queue depth of 1 for which I would get maybe 150 MB/sec average with wide variations.  If there happens to be a lot of other IO on the source array during my backup window, this setting would be even more beneficial.

  While at first glance it may appear this would only benefit high-bandwidth storage situations (such as my situation with source being SAS attached and network being 10 GbE), I believe it would benefit other lower-bandwidth storage situations as well, since it will tend to reduce IO waits at the source in all situations.  There is no reason to keep queue depth so short when the agent knows the next Gigabytes of requests that it will issue against some large vhd file snapshot. 

A Brief History of Backup

A 20-year Retrospective

"Study the past if you would define the future."
– Confucius

A lot has changed in the backup industry over the past two decades. In honor of World Backup Day, we thought it might be fun to take a look back and see how things have changed over that time. We produced this Infographic to give you the big picture, but read on if you want to know more of the details and how NetBackup and Backup Exec have played an essential role in shaping the backup industry as it exists today.

  • Three-tier architecture– NetBackup’s unique three tier architecture transformed the management of enterprise backup by allowing all policies and management tasks to be orchestrated from a single master server which then controlled a number of media servers that performed the actual data movement, backup, and restore operations. The result allowed a smaller IT staff to manage backup operations for a much larger IT environment.
  • Open file backup– It’s hard to imagine, but there was a time when you had to shut down your applications in order to back them up. In the mid-90s Backup Exec and NetBackup promoted a new open file backup technology based on file system snapshots that enabled live data to be backed up while in use, greatly simplifying the daily backup task and ensuring important files were not skipped. Backup administrators could sleep a lot better after that.
  • Email message-level restore– In the early days of Microsoft Exchange, email backups were database-only, so if you only wanted to restore a specific message you had to first recover the entire email database to an alternate location and then browse for the message you needed. To remedy this problem, Backup Exec pioneered the first generation of Granular Restore Technology, allowing a single email message to be selected and restored from the Backup Exec interface, forever changing how email is restored.
  • Shared storage over SCSI– Large tape libraries had become the norm in the mainframe environment, providing economies of scale for large enterprises. But the open systems environment still required storage to be tethered to a single server, resulting in many small islands of storage, including smaller tape libraries and autoloaders, one per server. That was until NetBackup introduced a groundbreaking technology that allowed multiple servers to share a single tape library over SCSI. Tape management was never the same after that.
  • Tape multiplexing– In the 90s tape drive performance had grown to the point where the bottleneck shifted to the network. Despite their fast performance, tape drives often sat idle waiting for data to travel over the network. NetBackup solved this problem with tape multiplexing, allowing multiple data streams from different sources to be interleaved into fewer, high performance data streams that could keep the tape drives spinning as fast as they could go. Backup windows were manageable again.
  • Network Data Management Protocol (NDMP)– Filers introduced the NDMP interface as a means of protecting filer data via remote control without agents. NetBackup quickly implemented its NDMP support and later extended its capabilities to allow several different backup methods. Today NDMP is the default, low cost method for protecting filer data.
  • Bare Metal Restore (BMR)– Originally introduced under the name Intelligent Disaster Recovery, the concept of Bare Metal Restore was pioneered by Backup Exec and later proliferated by NetBackup. With BMR technology, full system recoveries were automated and no longer required restore operators to know the details of configuring hardware or installing operating systems. Such automation improved recovery times and eliminated much of the human error common in high-pressure recovery situations.
  • Database Block Level Incremental backup (BLI) – Protecting very large databases (VLDB) has always pushed the limits of backup technology. NetBackup’s early solution to this problem was through tight integration with Veritas Storage Foundation storage checkpoint technology, allowing only block-level changes within databases to be copied while ignoring everything else. BLI backups of Oracle and DB2 could be performed in a fraction of the time, making VLDBs a lot less scary than they used to be to the backup team.
  • FlashBackup– Some of the largest applications at the time, such as the Human Genome project, were now hosting millions of files on a single system. Backing up these systems proved to be very slow as the file system became the primary bottleneck when the number of files reached a certain point. NetBackup designed a revolutionary solution to this problem, bypassing the file system with an image-style backup, but then performing a block map of the file system as part of the catalog post-process. The result was FlashBackup, an ideal way to quickly back up a massive file system without losing the ability to restore a single file.
  • Shared storage over a Fiber Channel SAN– In the 90s large enterprises were using very large tape libraries to protect their mainframes, but could not leverage these investments to protect their LAN systems. That all changed when storage area networks (SAN) were born. Backup Exec launched Shared Storage Option, the world’s first backup solution based on Fiber Channel SANs in partnership with Compaq (now HP), allowing those huge tape libraries to finally be shared by many systems connected via a SAN.
  • Hardware snapshot integration– NetBackup introduced its first generation of snapshot management integration with EMC Timefinder, enabling the NetBackup administrator to incorporate snapshots into their backup strategy. With this capability, NetBackup could orchestrate data synchronization and 3rd-mirror break-off backups on EMC Symmetrix arrays.
  • Vertex Initiative– In the shadow of the Internet Bubble, the convergence of backup and snapshot technology leaped forward with something called the Vertex Initiative, NetBackup’s second generation of snapshot integration. NetBackup delivered integration with several hardware snapshot technologies and allowed customers to perform nondisruptive, snapshot-assisted backups, including off-host and server-free backups.
  • Virtual Tape Libraries (VTL)– As the costs of disk began to compete with tape, the first wave of disk-based backup technologies were dominated by virtual tape libraries. While supporting VTLs was a straightforward qualification effort, NetBackup took it one step further and provided a level of integration that allowed VTLs to communicate with NetBackup so image duplication and data retention could be accomplished within the NetBackup policy framework.
  • Disk-to-Disk-to-Tape (D2D2T)– The next wave of disk-based backup began to incorporate standard JBOD systems into the mix as the first stage of a tiered backup architecture that combined disk and tape. NetBackup pushed the state-of-the-art forward with its flexible and advanced disk-based backup technologies, allowing disk capacity to be more easily managed using water marks and more logical operations that broke free of the limits of legacy tape methods.
  • Global data deduplication– The continued growth of disk as a backup target offered new opportunities to exploit the advantages of disk over tape. One of the most important innovations was deduplication technology which allowed data redundancies to be eliminated on a scale far beyond legacy compression technologies. NetBackup introduced its first generation of data deduplication in 2005, reducing storage requirements by as much as 90% or more. With this technology, disk finally became more cost-effective than tape as a primary backup target.
  • V-Ray– As VMware took the IT world by storm, NetBackup was first out of the gate to deliver truly advanced capability to this platform. Later dubbed “V-Ray,” NetBackup’s first generation of VMware integration did something nobody else could do: select and restore a single file from a VMDK-level backup. No longer would backup admins have to choose between VMDK image-level backups or VM guest-level backups. Cited by the analysts as one of NetBackup’s most distinctive and industry-leading capabilities, V-Ray technology helped NetBackup win Best of VMWorld in 2007, the first of many awards to follow.
  • OpenStorage Technology (OST) – The next wave of disk-based backup saw the proliferation of intelligent storage devices and appliances. Rather than put together a string of ad-hoc integrations with different vendors, NetBackup created an interface to allow these devices to “plug-in”, allowing NetBackup to directly leverage the intelligent capabilities of these devices. Launched as the OpenStorage Technology API, this interface has all but become a de facto standard in the backup industry.
  • NetBackup 5200 Integrated Backup Appliance– NetBackup decided to enter the backup appliance market with a solution that combined software and hardware into a fully integrated platform. Since then the NetBackup Appliance has outpaced the growth of all other backup appliances in the market. Unlike so-called target appliances, the NetBackup Appliance replaces storage, media servers, and storage interconnects, greatly simplifying the deployment and maintenance of NetBackup infrastructure.
  • Accelerator– The latest innovation to transform the backup world came with NetBackup 7.5. NetBackup combined several key technologies including optimized synthetics and changed block tracking to create a method of backup that permanently eliminates the need to perform full backups. When combined with deduplication technology, daily backup times can be reduced by an order of magnitude over traditional full backup methods. Time-consuming full backups are now a thing of the past.

Despite all that has changed over the past two decades, one thing hasn't: a reliable backup remains as the final solution when all else fails. No matter how technology evolves in the future, having a spare copy in your back pocket will always bail you out.

Solaris 11 Syntax to Update Client

I need a solution

What is the correct syntax to use to update the NB client on SunOS 5.11?

I tried Solaris Solaris11 and got the error below...

Cannot update client servername
        Invalid hardware type "Solaris" and/or operating system "Solaris11"
        or client binaries are not installed on the server.

When I tried it with Solaris Solaris10 I got this error...

ERROR:   Detected an attempt to install incorrect platform and/or
         operating system and version client binaries on servername.
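
A quick way to see which hardware/OS client binaries are actually present on the server (a sketch, assuming a UNIX master at the default install path; the second error above suggests the Solaris11 client binaries may simply not be installed there):

    # list the client platforms the server can push
    ls /usr/openv/netbackup/client/Solaris/

    # then retry update_clients with a hardware/OS pair that appears in that listing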

Backups don't complete - hangs on Active: Running - Backup

I need a solution

Just upgraded from a working install of Backup Exec 2012 to BE 2014 14.1 with all updates installed. Since upgrading, I have not had a single successful backup. I have deployed the 2014 backup agent to my servers and rebooted them after the install. They now validate as having the latest version of the agent installed.

The backups run and appear to back up all of the selections as expected. The issue is that the backup job never ends. It just hangs on "Backing up". The job status indicates "Active: Running - Backup". If I view the job activity, it doesn't seem to be working on any current file or directory. Judging by the byte and file counts, I'm pretty sure everything has been backed up, but the job doesn't stop.

The job log xml file shows the following:

Job server: BACKUPSRV 
Job name: SRV23.DOMAIN.LOCAL Backup 00014-Full - End of Week Job started: Monday, March 30, 2015 at 4:40:37 PM 
Job type: Backup Job Log: BEX_BACKUPSRV_04279.xml Job 
Backup Method: Full   
Drive and media mount requested: 3/30/2015 4:40:39 PM  
Drive and media information from media mount: 3/30/2015 4:40:42 PM 
Drive Name: Synology DS512+ 
Media Label: B2D014498 
Media GUID: {6326d716-216c-493b-a5d8-4eeac38bbb6f}  
Job Operation - Backup Backup Set Retention Period: 3 Weeks.
Compression Type: None 
Encryption Type: None  

SRV23.DOMAIN.LOCAL
Backup Exec server is running Backup Exec version 14.1.1786.1103 with SP-2,HF-227745. 
Agent for Windows(SRV23.DOMAIN.LOCAL) is running Backup Exec version 14.1.1786.1103 with SP-2,HF-227745. 
Snapshot Technology: Started for resource: "\\SRV23.DOMAIN.LOCAL\C:". 
Snapshot technology used: Symantec Volume Snapshot Provider (VSP) for Windows Server 2000 only. 
Snapshot Technology: Symantec Volume Snapshot Provider (VSP) snapshot cache file settings for volume "C:" : 
Initial: 12376 MB. Maximum: 12376 MB. File: "\\?\Volume{dd143455-36c9-11dc-a162-806d6172696f}\Backup Exec AOFO Store\_BEVspCacheFile_1.VSP". 
Snapshot Technology: Symantec Volume Snapshot Provider (VSP) snapshot settings used: Quiet Time: 5 seconds. 
Time Out: 2000 seconds. 
Network control connection is established between 172.22.1.62:62236 <--> 172.22.1.53:10000 
Network data connection is established between 172.22.1.62:62331 <--> 172.22.1.53:3450 \\SRV23.DOMAIN.LOCAL\C: 
Family Name: "Media created 3/30/2015 4:40:39 PM" 
Backup of "\\SRV23.DOMAIN.LOCAL\C:" 
Backup set #1 on storage media #1 Backup set description: "" 
Backup Method: Full - Back up files (using modified time) Backup started on 3/30/2015 at 5:06:53 PM. 
Backup completed on 3/30/2015 at 5:11:08 PM. 
Backed up 11495 files in 1321 directories. 
Processed 1,137,817,834 bytes in 4 minutes and 15 seconds. 
Throughput rate: 255 MB/min 

Additionally, cancelling the backup manually (which requires killing the job engine process) results in the Job Status being "Cancelled". It also means the files from the backup do not show up in the restore list when attempting to restore.

I can provide additional logs as needed.

UEFI bios & GPT disk P2V failed with SSR 2013 R2

$
0
0
I need a solution

Hi

I tried to convert my physical server (x3650 M4 + Windows 2012) to a virtual server (Hyper-V) with SSR 2013 R2.  The conversion process worked, but the virtual server failed to boot.

I wonder whether SSR 2013 R2 supports P2V for a server with UEFI BIOS and GPT disks.  Thank you.

Data retention

I need a solution

What if I do an incremental backup to tape A (retention 2 days) and then a full backup to tape B (keep forever)? Will this work?


NetBackup v7.6.1 BMR Admin Guide - ports, table 6-13...


Hi - the table of network ports used by BMR, as listed in the NetBackup v7.6.1 BMR Admin Guide (table 6-13, on pages 147-148), is not very clear.

Please can the manual be updated to include more detail, such as the source and destination of each connection (i.e. BMR client, or BMR server - whether BMR boot server, BMR master, master, or media server), and the source and destination ports.

Also, the table lists that Windows BMR clients use ports 13724 and 13782 - but these are legacy communication ports, as opposed to port 1556 for Veritas PBX.  So can the table either be corrected, or amended to explicitly state that port 1556 is not used, and thus explain why ports 13724 and 13782 are used.

Thank you.

Reclaiming de-duplication space manually

I need a solution

I am running all my backup jobs to a deduplication disk with a retention period of 5 weeks.  When I look at the details, I see lots of expired backup sets that aren't being automatically cleaned up by the system, quite a few of them past the 5-week retention period.  The backup sets listed should all have been duplicated by a later job, so they aren't the only copy.  My question is: can I go in, right-click each expired backup set, select Expire, and reclaim the disk space listed under the size column?  Or does this mess with any catalogs maintained by the deduplication process?  I have several TB that I would like to recover that aren't being released.  Thanks.

PS:  I do not have "allow Backup Exec to delete all expired backup sets" checked.

Deduplication.JPG

Backup Exec 2014, Sharepoint Hyper-V VM backups w/SQL server 2008 R2

I need a solution

Hi,

I have a Windows 2012 server that hosts 2 Hyper-V VMs. One VM is a Windows 2008 R2 server running IIS; the other is a Windows 2008 R2 server running SharePoint 2010 Foundation (in farm mode) with SQL Server 2008 R2 Express. I know that's not the recommended way to run SharePoint, but I'm budget/resource constrained.

I have Backup Exec 2014 (with SP2) running on the host in Windows 2012, with the Agent for Hyper-V and the Agent for SharePoint (Apps & DBs) installed, as well as the BE Remote Agent installed in both VMs.

I have an LTO3 tape drive attached to the host for backup.

I want to back up the host machine, the web server VM, and SharePoint.  I know I need to back up the SharePoint server farm, but what about the SharePoint VM itself? Should I select the VM and exclude SharePoint within the VM selection?

Thanks!


Slow performance on FlashBackup Windows restore

I need a solution

Hello Everybody!!

I am running backups of a physical Windows client server that has a lot of small files on three file system drives. The backup is configured as FlashBackup because the backup window is restricted and there is about 3 TB of data per file system.

The backup runs with multiple streams. During the backup the throughput is 45 MB/sec.

When restoring, the throughput is 3 MB/s. I have set NUMBER_DATA_BUFFERS, NUMBER_DATA_BUFFERS_RESTORE, SIZE_DATA_BUFFERS, and NET_BUFFER_SZ. There is no performance improvement.

Also the activity monitor does not reflect any change. The bptm log shows io_init: buffer size for read is 262144.
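
For reference, these are the touch files those settings live in on the media server (a sketch assuming a UNIX media server at the default path; the buffer counts shown are placeholders, and 262144 matches the read buffer size reported in the bptm log):

    # data buffer touch files on the media server
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE

    # network buffer size, in bytes
    echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ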

BACKUP HELP

I need a solution

Our organization has had at least 10 servers, with no past history of VSS writer issues, fail due to writers crashing since upgrading to 2014. Is anyone else having similar issues?
BE support is of course useless and just tells you to contact Microsoft, since it's their writers that are crashing.
