On page load, the first div, "div1", is visible and the rest are hidden. On subsequent link clicks, the div named by the class of the clicked list element is shown and the rest are hidden.
jQuery(function () {
    $('div.navDiv').not('#div1').hide(); // on load only div1 stays visible ('navDiv' is an assumed shared class on the target divs)
    $('#nav a').click(function () {
        var myClassLinked = $(this).parent().attr('class'); // the li's class names the target div's id
        $('div.navDiv').hide().filter('#' + myClassLinked).show();
        return false;
    });
});
This is a very common scenario where our deployment guides become useless for automation: features activate fine from the site/site collection settings UI, but fail when activated through PowerShell (where the feature receiver runs outside IIS and cannot read the web application's web.config the usual way).
Here is the solution:
using (SPWeb web = properties.Feature.Parent as SPWeb)
{
    SPWebApplication webApp = web.Site.WebApplication;
    Configuration config = WebConfigurationManager.OpenWebConfiguration("/", webApp.Name);

    // App settings are retrieved this way
    string appSetting = config.AppSettings.Settings["someAppSettingKey"].Value;

    // Connection string settings are retrieved this way
    string connectionString = config.ConnectionStrings.ConnectionStrings["someDBConnectionString"].ConnectionString;

    web.Site.Dispose();
}
readStream = new StreamReader(receiveStream, Encoding.UTF8);
string pageContent = readStream.ReadToEnd();
Now, out of this pageContent string, I want to extract all sub-requests (scripts, images, stylesheets and the like) and request each of them. Is there something better than the HTML Agility Pack for this?
Reply 1 By http://social.msdn.microsoft.com/profile/joel%20engineer/?ws=usercard-mini
I usually read it into an HtmlDocument class and then use either GetElementById or GetElementsByTagName().
string pageContent;
webBrowser1.Document.Write(pageContent);
jdweng
Reply 2
You should try http://htmlagilitypack.codeplex.com/
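For illustration, here is a minimal HtmlAgilityPack sketch (an assumption on my part, not code from the thread) that collects the sub-request URLs out of the pageContent string from above:

// Sketch only: extract script/img/link URLs from pageContent using HtmlAgilityPack.
var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(pageContent);
var urls = new System.Collections.Generic.List<string>();
var nodes = doc.DocumentNode.SelectNodes("//script[@src]|//img[@src]|//link[@href]");
if (nodes != null)
{
    foreach (var node in nodes)
        urls.Add(node.GetAttributeValue("src", null) ?? node.GetAttributeValue("href", null));
}
// Each URL in urls can then be requested (e.g. with HttpWebRequest) to simulate the sub-requests.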
For my document library, WorkflowAssociations.Count is zero.
For this document library, EnableModeration is false.
Still, I can see a column in the document library named "Approval Status" (internal name "_ModerationStatus").
When I try to set EnableModeration to true for this document library, I get an exception.
It seems that at some point in the past a workflow was enabled on this list, or EnableModeration was true, but setting these back to false did not remove the Approval Status column.
How can I fix this document library, without deleting it, so that enabling workflows or setting EnableModeration = true in the future does not throw an exception?
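For reference, the failing call is essentially the following (a sketch; "Documents" is a placeholder library name):

SPList library = web.Lists["Documents"];   // placeholder: the damaged document library
library.EnableModeration = true;           // this is the line that throws the exception
library.Update();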
• On the Web Application Management page in Central Administration (e.g. http://ServerName:CentralAdminPort/_admin/WebApplicationList.aspx), go to Permission Policy.
• Check the box that denies users the right to view application pages, and save.
• Now go to the User Policy tab for the main web application and click Add Users.
• Select the zone on which the public main application is deployed.
• In the people picker, type the "ASP.NET membership provider name", set Permissions: Deny System Pages, and click Finish.
Now access a few application pages and lists directly with a forms authentication user to verify the behavior.
Alternative approaches and suggestions are welcome in the comments section.
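For completeness, a rough object-model equivalent of the UI steps above (a sketch only: it assumes the "Deny System Pages" permission policy level was already created as described, the claim login shown is a placeholder, and the zone and names will differ per environment):

SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://ServerName"));

// Find the permission policy level created in the first step above.
SPPolicyRole denyRole = null;
foreach (SPPolicyRole role in webApp.PolicyRoles)
    if (role.Name == "Deny System Pages") { denyRole = role; break; }

// Add a user policy on the zone where the public application is deployed.
SPPolicyCollection zonePolicies = webApp.ZonePolicies(SPUrlZone.Default);
SPPolicy policy = zonePolicies.Add(@"i:0#.f|membershipprovider|someuser", "Some FBA User");
policy.PolicyRoleBindings.Add(denyRole);
webApp.Update();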
On the page http://ServerName/_Layouts/VariationLogs.aspx, when I ran the timer job "Variations Create Hierarchies Job Definition", I was getting this error (under the Failures column):
The Variations Create Hierarchies job failed with the following error message: <nativehr>0x80070057</nativehr><nativestack></nativestack>.
I was getting the errors mentioned below in the ULS logs:
Process : OWSTimer.exe Area : Web Content Management Category : Site Management Level : Unexpected
PublishingWeb::CreateVariationsDeepForCurrentPages caught Exception at item 'Pages/Folder1/Folder2/PageName.aspx' with : <nativehr>0x80070057</nativehr><nativestack></nativestack>
Process : OWSTimer.exe Area : Web Content Management Category : Publishing Level : Monitorable
GetFileFromUrl: ArgumentException when attempting get file Url http://OldServerName/_catalogs/masterpage/PageLayoutFileName.aspx <nativehr>0x80070057</nativehr><nativestack></nativestack> at Microsoft.SharePoint.Library.SPRequestInternalClass.GetMetadataForUrl(String bstrUrl, Int32 METADATAFLAGS, Guid& pgListId, Int32& plItemId, Int32& plType, Object& pvarFileOrFolder) at Microsoft.SharePoint.Library.SPRequest.GetMetadataForUrl(String bstrUrl, Int32 METADATAFLAGS, Guid& pgListId, Int32& plItemId, Int32& plType, Object& pvarFileOrFolder) at Microsoft.SharePoint.SPWeb.GetFileOrFolderObject(String strUrl) at Microsoft.SharePoint.Publishing.CommonUtilities.GetFileFromUrl(String url, SPWeb web)
Process : OWSTimer.exe Area : SharePoint Foundation Category : General Level : High
Process : OWSTimer.exe Area : Web Content Management Category : Site Management Level : Unexpected
PublishingWeb::CreateVariationsDeepForChildPages() caught Exception at web 'http://NewServerName/es-LA/Subsite1' with : <nativehr>0x80070057</nativehr><nativestack></nativestack>
Process : OWSTimer.exe Area : Web Content Management Category : Site Management Level : Unexpected
DeploymentWrapper::CreateVariantPage() on sourcePage = 'Pages/Folder1/Folder2/PageName.aspx', targetWeb = 'http://NewServerName/es-LA/Subsite1' catches an unexpected exception: System.ArgumentException: <nativehr>0x80070057</nativehr><nativestack></nativestack> at Microsoft.SharePoint.Library.SPRequestInternalClass.GetMetadataForUrl(String bstrUrl, Int32 METADATAFLAGS, Guid& pgListId, Int32& plItemId, Int32& plType, Object& pvarFileOrFolder) at Microsoft.SharePoint.Library.SPRequest.GetMetadataForUrl(String bstrUrl, Int32 METADATAFLAGS, Guid& pgListId, Int32& plItemId, Int32& plType, Object& pvarFileOrFolder) at Microsoft.SharePoint.SPWeb.GetFileOrFolderObject(String strUrl) at Microsoft.SharePoint.Publishing.CommonUtilities.GetFileFromUrl(String url, SPWeb web) at Microsoft.SharePoint.Publishing.PublishingPage.get_Layout() at Microsoft.SharePoint.Publishing.PublishingPage.set_Layout(PageLayout value) at Microsoft.SharePoint.Publishing.Internal.DeploymentWrapper.CreateVariantPage(PublishingPage sourcePage, PublishingWeb targetArea, String destPageWebRelativeUrl, String pageLayoutName, String title, String description, Boolean copyResources, Boolean enforceMajorVersion, SPWeb& webToClose, PageLayout[] targetAreaAvailablePageLayouts).
Possible Reasons:
1. You have recently moved your content DB from an old server; the display names for the page layouts appear updated (even on mouse hover), but internally the old server name is still referenced.
2. Even after updating a page layout via the object model, the publishing cache is not flushed.
Possible Resolution:
1. For all the publishing pages under the source variation, update the "PublishingPageLayout" property to http://NewServerName/_catalogs/masterpage/PageLayoutFileName.aspx [old value: http://OldServerName/_catalogs/masterpage/PageLayoutFileName.aspx], e.g. page.ListItem.Properties["PublishingPageLayout"] = newPageLayout; (a fuller sketch follows after this list).
2. Flush the object cache at http://ServerName/_Layouts/objectcachesettings.aspx.
3. IISRESET.
4. Restart owstimer.exe.
5. Under http://ServerName/_Layouts/VariationLabels.aspx, schedule Create Hierarchies again.
6. Have a cup of tea.
7. Start the timer job 'Variations Create Hierarchies Job Definition' manually, or wait for the schedule.
8. Keep an eye on http://ServerName/_Layouts/VariationLogs.aspx.
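A fuller sketch of the fix-up loop from step 1 (assumptions: the site URL and old/new server names are placeholders, the pages are in a state that allows updating, and it writes the list item field directly rather than the Properties bag):

using (SPSite site = new SPSite("http://NewServerName"))
using (SPWeb web = site.OpenWeb())
{
    PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);
    foreach (PublishingPage page in pubWeb.GetPublishingPages())
    {
        // PublishingPageLayout is a URL field: "http://server/_catalogs/masterpage/Layout.aspx, Layout name"
        SPFieldUrlValue layout = new SPFieldUrlValue((string)page.ListItem["PublishingPageLayout"]);
        if (layout.Url != null && layout.Url.StartsWith("http://OldServerName", StringComparison.OrdinalIgnoreCase))
        {
            layout.Url = layout.Url.Replace("http://OldServerName", "http://NewServerName");
            page.ListItem["PublishingPageLayout"] = layout;
            page.ListItem.SystemUpdate();   // avoid creating a new version
        }
    }
}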
Except for the Chinese variation, I am able to access the Pages library like:
web.Lists["Pages"] or web.Lists.TryGetList("Pages").
But for the Chinese sub site:
If I iterate the web.Lists collection using a foreach loop, there is a list present whose SPList.Title is "Pages". For this list, under SPList.SchemaXml, the title is Title="頁面".
web.Lists["Pages"] gives the error: List 'Pages' does not exist at site with URL 'XXXXXXXXXXXXXXXXXXXXXXXXX', and web.Lists.TryGetList("Pages") gives null.
But web.Lists["頁面"] returns the right SPList object.
Is it that the SharePoint APIs which retrieve a list by name internally refer to SchemaXml rather than SPList.Title?
Reply 1 by http://social.technet.microsoft.com/profile/ivan%20vagunin/?ws=usercard-mini
Hi!
I would propose using the SPWeb.GetList function - it returns a list by URL (e.g. web.GetList("/myweb/Pages")), and the URL is usually the same for all locales.
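A small sketch of that suggestion (assuming the web's server-relative URL; GetList takes a server-relative or absolute URL, so the localized title never enters the picture):

using (SPSite site = new SPSite("http://ServerName/myweb"))
using (SPWeb web = site.OpenWeb())
{
    // Works even when the localized title is "頁面" instead of "Pages".
    SPList pages = web.GetList(SPUrlUtility.CombineUrl(web.ServerRelativeUrl, "Pages"));
    Console.WriteLine(pages.Title);
}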
We have a staging source and a production target. For the first migration onto a blank site collection:
1. Even if we migrate content to a blank site collection, we get many warnings like:
a. [WebFeature] [Publishing] Warning: Provisioning did not succeed. Details: Failed to create the 'Images' library. OriginalException: The feature failed to activate because a list at 'PublishingImages' already exists in this site. Delete or rename the list and try activating the feature again.
b. Warning: Provisioning did not succeed. Details: Failed to create the 'Images' library. OriginalException: The feature failed to activate because a list at 'PublishingImages' already exists in this site. Delete or rename the list and try activating the feature again.
c. Warning: User or group 19 cannot be resolved.
Can we use a backup of staging as the target instead of a blank site collection? What drawbacks other than the initial migration failure should I expect from this?
2. Content deployment deploys the most recent major and minor versions of a content item. For example, if version 2.7 of a Web page is being deployed, the most recent major version (2.0) of the page, and the most recent minor version (2.7), will be deployed to the destination site.
Is it possible to limit the migration to the last major version only?
We started with a blank site collection created using PowerShell, and we still get the error in point 1.
I am referring to http://centralAdmin/_admin/Deployment.aspx in the problem statements above.
In the section about the requirements for a successful content deployment, he recommends using an empty site collection for the destination, and not a site collection created with the Blank Site template (STS#1):
3) Use an empty site collection as the destination of your content deployment job
As already discussed in Part 5, content deployment will fail if the destination database contains conflicting content. To avoid this, it is required that the initial deployment is done into an empty site collection.
Be aware that the only way to create an empty site collection is to use the following STSADM command:
Using the "Blank Site" template will NOT create an empty site collection! It will actually create a site collection with content. You can see the difference if you create a site collection using both methods and then inspect the content of the created sites using SharePoint designer.
I personally recommend always using the STSADM command with the syntax above, to ensure that you really have an empty site collection as the destination.
Impact: If the site collection has been created using a different method or already contains data the content deployment job will fail.
How to resolve: Deploy into an empty site collection
For your second question, content deployment will deploy the latest published version. If you don't want a minor version to be deployed (it's a draft, perhaps), simply do not publish it.
Reply 2 By http://social.technet.microsoft.com/profile/neal%20mcfee%20%5Bmct%5D/?ws=usercard-mini
To create a site collection that does not have a template (an empty site), you omit the -Template parameter in the New-SPSite cmdlet (e.g. New-SPSite -Url http://server/sites/target -OwnerAlias DOMAIN\user, with no -Template argument).
If you create a blank site using the STS#0 (or any other) template, this will not work for the task you are trying to complete.
Try deleting the target site collection and recreating it with New-SPSite, but do not specify the template.
Then re-run your publishing job.
If this helps you, then please mark the post as helpful. If this answers your question, then mark it as the answer. If another contributor in the thread answers your question, then please do the right thing. And as always, most answers for SharePoint are based on "it depends".
variationsfixuptool is used to correct variations system data on publishing sites or pages. If you want to analyze the actual site structure and the data stored in the relationships list, this tool may be a good one.
Syntax
stsadm -o variationsfixuptool
-url <source variation site URL>
[-scan]
[-recurse]
[-label]
[-fix]
[-spawn]
[-showrunningjobs]
Parameters

url
Value: A valid URL, such as http://server_name
Required: Yes
Description: The URL of a site in the source variation where variations system data is being analyzed or corrected.
scan
Value: <none>
Required: No
Description: Analyzes the variations hierarchy and reports findings. This parameter provides functionality that cannot be accessed using the Central Administration Web site.
For each site/page, it reports:
• Whether the source site is marked as being in the source variation hierarchy (SPWeb.AllProperties["__InSourceHierarchy"] == True). If the site is marked as being in the source hierarchy, the page is part of the source hierarchy.
• The variation group ID of the source site or page (SPWeb.AllProperties["Variation Group Id"] or PublishingPage.ListItem[FieldId.VariationGroupId]).
• Which peers are registered in the relationships list with the same variation group ID.
• Whether a variation peer exists in the configured labels (default is all spawned labels), by first checking whether a peer is configured in the relationships list with the same variation group ID for the given label. If no peer can be found using the relationships list, the tool tries to look up the peer using the default URL that would be used when creating a variant in the given label.
• The variation group ID of the variation peers in the target labels (SPWeb.AllProperties["Variation Group Id"] or PublishingPage.ListItem[FieldId.VariationGroupId]).
• Whether the source site/page is configured in the relationships list.
In addition, the command reports the variation labels used for the peer check (default is all spawned labels).
The tool will not identify any issues by itself - you must analyze the report in detail and interpret it in order to find the problems.
recurse
Value: <none>
Required: No
Description: Scan or fix all subsites of the site specified by the url parameter.
label
Value: A valid label name, such as "English"
Required: No
Description: Name of the label of the variation target.
fix
Value: <none>
Required: No
Description: Corrects invalid variations system data that is found. If the recurse parameter is used, fixes are applied recursively for all subsites. This parameter provides functionality that cannot be accessed using the Central Administration Web site. Fix mode will not create missing variation peers.
The following issues are automatically fixed:
• Missing variation group ID on the source site or source page
• Missing relationships list entry for the source or target page or site
• Missing variation group ID on the target site or target page
• Different variation group IDs on the source site/page and target site/page (the source ID wins)
spawn
Value: <none>
Required: No
Description: Creates new site variations of the source variation site specified by the url parameter, for all target variation labels. If the recurse parameter is used, variations for subsites and pages are also created. This parameter is equivalent to the New Variation Site user interface setting located on the Site Content and Structure page.
This operation mode cannot be used to spawn pages in already spawned sites.
The command initiates the site spawn operation by creating a scheduled work item for the SpawnSiteJobDefinition timer job. The spawn is performed as soon as the timer job runs.
The difference between the scheduled work item created by the CPVAreaEventReceiver and the one created by variationsfixuptool is that the tool has the option to create a work item which spawns the site to only one specific variation label rather than to all spawned variation labels.
A common usage scenario is to configure the source variation label with the Hierarchy Creation option set to create only the root site and not the complete hierarchy, and later to spawn the hierarchy, or parts of it, from the source label using the STSADM command.
In MOSS 2007 this command was a major benefit compared to the automatic hierarchy creation in the UI, as it was the only way to force the creation to run in OWSTIMER rather than W3WP. With SharePoint 2010, where all these actions already run in OWSTIMER, the benefit of using the STSADM command rather than the UI is rather limited.
showrunningjobs
Value: <none>
Required: No
Description: Displays the current status of the Variations Propagate Page Job Definition and Variations Propagate Site Job Definition timer jobs, as shown on the Timer Job Status page of the SharePoint Central Administration Web site. Note that this does not provide information about the Variations Create Hierarchies Job Definition, Variations Create Page Job Definition, or Variations Create Site Job Definition timer jobs; you only learn about the two propagate jobs.
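For example, a scan-only run over a whole source hierarchy (the URL is a placeholder) would look like:

stsadm -o variationsfixuptool -url http://server_name/sites/source/en-us -scan -recurse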
Under the "System" group > "Keywords" term set, I can see various terms named as combinations of digits, like:
1,2,3,4,5,6
1,2,3,6,4,5
1,2,5,6,4,3
1,3,4,2,5,6
1,3,5,2,4,6
1,3,6,2,4,5
1,4,2,3,5,6
1,4,3,6,2,5
1,5,2,3,4,6
1,5,2,3,6,4
1,6,2,3,4,5
If you know why they are there, kindly share it with all of us. Please explain the purpose of these digit terms (Enterprise Keywords); if we lose a few of them, what might the consequences be?
The source of the question is that one of our servers has a few extra ones, like: 18,19,20,21,22,23
Reply 1
Hello,
I checked, and I wasn't able to reproduce this issue with content deployment between 2 SharePoint environments, so this seems like an environment-specific issue. In-depth engagement is not feasible through forum replies, and this issue requires a more in-depth level of support. Please visit the link below to see the various paid support options that are available to better meet your needs: http://support.microsoft.com/default.aspx?id=fh;en-us;offerprophone. If you are an MSDN / TechNet subscriber, you can also contact our support using your free support incidents.
However, other members of the community may still have encountered the issue you're seeing, and have a solution to offer!
The keywords visible under the "System" group are added by tagging documents / list items with predefined (taxonomy) and user-defined (folksonomy) terms. There are no pre-populated digit keywords, so I do not believe there would be any impact other than tagged items losing these tags.
If staging has a dummy term and a corresponding TaxonomyHiddenList entry, content deployment / migration from staging to production creates these kinds of terms.
What Microsoft could do to fix it:
1. Suppose staging has a term (myGuid) under TermSet1 (Guid1), under Group1 (Guid2), under Store1 (Guid3), and while migrating the site collection it finds a term in TaxonomyHiddenList which is missing on the target.
2. It should then search for Guid1 under Guid2 under Guid3 and create a term with myGuid there.
As of now, the current implementation of Microsoft's migration APIs seems to create the entry under the System group's Keywords term set instead.
Microsoft.SharePoint.Taxonomy.dll version 14.0.4756.1000
We migrate only published and approved content from staging to production. In one scenario we want to introduce new terms into our system. Here is the strange behavior we are facing, which might be reported to Microsoft for improvement in future service packs:
1. Create a new term on staging under the desired term set.
2. Reproduce the same term with the same GUID and parent term set on production, by one of:
• using the TermSetItem.CreateTerm method (String, Int32, Guid);
• stopping the metadata service and then replacing the DB on production with the one from staging;
• using the Export-SPMetadataWebServicePartitionData / Import-SPMetadataWebServicePartitionData PowerShell cmdlets.
3. Create an unpublished SPItem on staging which refers to the new term.
4. Let the content deployment proceed. This will migrate only the term entry in TaxonomyHiddenList.
5. Publish an SPDoc which refers to a few of the old terms and the new term.
6. Let the content deployment migrate the newly published item.
7. Now, on production, the newly migrated SPDoc has only the new term attached.
If we migrate an SPDoc which has only old terms, not the new one, it migrates well and all the terms are visible under the display item form > SPField.
How you could help me with this:
I got the tp_ListId and tp_DocId for my migrated target document from the [AllDocs] database table using LeafName.
Using tp_ListId and tp_DocId on [AllUserData], I was able to observe that content deployment has actually created the entry correctly here.
The entry under ntext2 is like: "OldTerm1|OldTerm1GUID;NewTerm1|NewTerm1GUID;OldTerm2|OldTerm2GUID"
Using all the SharePoint APIs, U2U, etc., my migrated item returns only the new term. It seems the old terms are wiped off, but they are not - they are still there in the database.
The output of U2U is:
NewTerm1|NewTerm1GUID
Now I have a question for you: what else besides the Taxonomy Update Scheduler timer job could be the culprit? Is it that the account under which the SharePoint Timer Job runs must have full control on the metadata DB and the site collection DB?
How Microsoft should help us with this:
For a new term referred to by an item, the internally exposed migration/deployment APIs should ensure that:
1. A new term is created under the same term set as on the source. Right now the term tends to be created under the System > Keywords term set if it is missing on the target (i.e. if I skip step 2 above).
2. No multiple entries are allowed in TaxonomyHiddenList with the same title/GUID, name, parent term set, etc. If we migrate an item which is published as in step 4 above, 2 entries are created in TaxonomyHiddenList.
3. The Taxonomy Update Scheduler is made efficient enough to handle multiple entries for the same term placed in TaxonomyHiddenList by external processes.
Reply 1
A.
After doing all the above steps once :
So the Taxonomy Hidden List has all the terms as desired, the term store is up to date, and the SQL content DB has the entry it should have.
This means the Taxonomy Update Scheduler is not able to update the content DB so that it can display the right values.
Do we need some kind of service pack here? We have:
Microsoft.SharePoint.dll version 14.0.4762.1000
Microsoft.SharePoint.Taxonomy.dll version 14.0.4756.1000
I tried TaxonomySession.SyncHiddenList(mySiteCollection); but it did not help.
B.
If I run my metadata service, web application pool, and SharePoint Timer Job with an "admin everywhere" account on a production replica and follow the steps above, along with TaxonomySession.SyncHiddenList(mySiteCollection), it still does not help.
C.
If I run all the app pools and Windows services on the server under an account that has admin rights everywhere, along with TaxonomySession.SyncHiddenList(mySiteCollection), it helps. I had to insert SyncHiddenList, along with a manual run of the taxonomy timer job, between steps 4 and 5 above, i.e. after the TaxonomyHiddenList item migration and before the actual content comes in.
Is there a shorter way to avoid all the mess described above? Or could you at least point out what else, besides my metadata service app pool, web application pool, and SharePoint Timer Job, is involved in taxonomies?
Reply 2
Temporary solution: everywhere on MSDN and in blogs, technology geeks have suggested making the taxonomy store common to the staging and production environments. In our case we cannot maintain this. So, with our version of SharePoint (Microsoft.SharePoint.dll version 14.0.4762.1000, Microsoft.SharePoint.Taxonomy.dll version 14.0.4756.1000), we have tested the following to work as an alternative:
0. On staging and production, go to Central Admin > Security > Configure Service Accounts.
a. Select Farm Account and press OK. The account you had previously designated as the farm account gets full control in many places in the DB; if the DBs are backed up and restored offline, we generally lose this important coupling between the farm account and the database. This ensures the timer service, which runs as the farm account, can do its work across the whole farm without errors.
b. Do the same as above for the app pool accounts of the managed metadata service and your target web applications.
1. Run an export/import so that staging and production are in sync. Halt changes on staging.
2. Create the new term on staging under the desired term set. Extract the GUID of this new term and keep it safe.
3. Create an unpublished SPItem on staging which refers to the new term (to create an entry in the taxonomy hidden list).
4. TaxonomySession.SyncHiddenList(mySiteCollectionStaging). Before the sync, make sure your SharePoint Timer Job runs under an account which has full rights on the site collection and the taxonomy database; this can be done by making it an admin for the site collection and the taxonomy service. After this, wait for the "Enterprise Metadata site data update" and "Taxonomy Update Scheduler" jobs to run once as scheduled.
5. Delete the unpublished item of step 3 if you wish.
6. Run the export process and discard this export package, because it contains all the terms, since they were updated by the sync process. Then:
A. Reproduce the same term with the same GUID and parent term set on production by using the TermSetItem.CreateTerm method (String, Int32, Guid); take the GUID and name from step 2 above (a sketch follows below).
B. Create an unpublished dummy item on production with the new term.
C. TaxonomySession.SyncHiddenList(mySiteCollectionProd). After this, wait for the "Enterprise Metadata site data update" and "Taxonomy Update Scheduler" jobs to run once as scheduled.
D. Delete the unpublished item if you wish.
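A minimal sketch of step A (assumptions: the term store, group, and term set names are placeholders, and the GUID must be replaced with the value extracted in step 2):

using (SPSite site = new SPSite("http://ProductionSite"))
{
    TaxonomySession session = new TaxonomySession(site);
    TermStore store = session.TermStores["Managed Metadata Service"];
    TermSet termSet = store.Groups["Group1"].TermSets["TermSet1"];
    Guid stagingTermId = new Guid("00000000-0000-0000-0000-000000000000"); // GUID from step 2
    Term newTerm = termSet.CreateTerm("MyNewTerm", 1033, stagingTermId);   // same GUID as on staging
    store.CommitAll();
}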
Now the servers are ready for future normal imports and exports without any errors, until you introduce new terms again.
Reply 3
Permanent Solution:
How Microsoft should help us with this:
1. Make the content deployment APIs smart enough that the source taxonomy hidden list is not marked for migration while exporting the site collection; when the SPItem which refers to a term is created on the target, the right (in-sync) term would then automatically be created under the target taxonomy hidden list.
2. In case the target term store is not updated, there should be explicit messages in the import log: a term is referenced which may be missing in the target term store; the import is unsuccessful; please update your term store and run the import again. Or let the SharePoint timer job create the term in the right place, and give an error if the parent term set is missing.
I want to display user profiles in a custom SharePoint grid. If the count of profiles were only 50, displaying them all takes around 40 seconds on my dev machine with all the custom processing I have.
So I want a page size of, say, 20, and on each Next click I should fetch the next 20 records from the profile DB. I should also be able to show page numbers at the bottom, for direct navigation.
Please suggest if you have something in mind / have done this already in the past.
I am going to rely on Microsoft.Office.Server.UserProfiles.ProfileManagerBase.GetEnumerator for this.
With http://msdn.microsoft.com/en-us/library/ee581591.aspx under Microsoft.Office.Server.UserProfiles we can at least implement Next/Previous pagination, but I am looking for page numbers with First and Last page buttons.
Anyway, we are targeting the membership provider as the base for pagination, which has public abstract MembershipUserCollection GetAllUsers(int pageIndex, int pageSize, out int totalRecords);. After that, for each page, we fetch the records from the user profiles.
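A minimal sketch of that approach (assuming a configured ASP.NET membership provider; the per-user profile lookup is left out):

int pageIndex = 0;    // zero-based page requested by the grid
int pageSize = 20;
int totalRecords;
MembershipUserCollection page = Membership.Provider.GetAllUsers(pageIndex, pageSize, out totalRecords);
int totalPages = (totalRecords + pageSize - 1) / pageSize;   // enables page numbers plus First/Last buttons
foreach (MembershipUser user in page)
{
    // Fetch the matching record from the user profile store for each membership user here.
}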
A few of our projects have been using Chris O'Brien's SharePoint Content Deployment Wizard since the old days of MOSS 2007. Others use the out-of-the-box Central Admin content deployment jobs and paths from Microsoft.
Microsoft keeps updating its out-of-the-box feature in SharePoint, as usual.
I am starting this discussion to encourage folks here to use the best option available. I would request everyone to share their views, like:
I prefer option 1 because it is something for which I can go back to Microsoft and request support. Option 1 gives me almost all the features that option 2 does.
I prefer option 2 for the level of customization it allows; with a custom API as a wrapper, I can control the whole migration process. Option 1 is kind of a black box for us.
You are welcome to share links where the tech world already has such handy comparison threads available, like the ones on the SharePoint Content Deployment Wizard.
Are there alternatives such that, without upgrading from Microsoft SharePoint Server 2010 (14.0.4763.1000), we still don't face the issues mentioned below?
An incremental content deployment of a package in SharePoint Server 2010 fails if the following conditions are true:
The package contains a renamed site.
A sub-site contains a link to the renamed site.
Additionally, you receive the following error message:
Assume that you perform an incremental content deployment on a destination SharePoint Server 2010 farm. In this situation, the changes for the alternative language settings on the source farm are not reflected on the destination farm.
When you use the Managed Metadata columns together with document sets in SharePoint Foundation 2010, you cannot perform a content deployment successfully. Additionally, you receive the following error message:
FatalError: Specified data type does not match the current data type of the property.
There was a time when you had to restart a lot of things after editing registry entries like HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Shared Tools\Web Server Extensions\HeapSettings, SPRequestStackTrace = 1,
to get to the root cause of memory leaks and similar issues.
Now you have the property SPWebService.CollectSPRequestAllocationCallStacks. This is a diagnostic setting that facilitates the debugging of leaks of SPSite and SPWeb objects; if not disposed properly, these objects can hold onto large amounts of memory. With this setting enabled, traces in the trace log that report leaks will contain call stacks. However, call stack collection is expensive, so it should only be enabled during a brief diagnostic period.
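Toggling it looks like this (a sketch; run from a farm-admin context, e.g. a console app on a server in the farm):

SPWebService service = SPWebService.ContentService;
service.CollectSPRequestAllocationCallStacks = true;    // expensive: enable only while diagnosing
service.Update();
// ... reproduce the suspected SPSite/SPWeb leak and inspect the ULS traces, then:
service.CollectSPRequestAllocationCallStacks = false;
service.Update();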
I have Server1 with Microsoft.SharePoint.DLL version 14.0.5123.5000 in Farm1.
I took a content DB backup of site collection SC1 on Server1, along with a taxonomy DB backup.
I restored these DBs on the Server2 (Microsoft.SharePoint.DLL version 14.0.4762.1000) SQL instance.
Created a new taxonomy metadata service on Farm2 from the restored metadata service DB backup.
Created a new web application and site collection on Farm2 from the restored content DB from Server1.
Manually ran the Taxonomy Update Scheduler timer job.
When I create a new item / edit one on Server2, a new entry is created in TaxonomyHiddenList and a new entry appears under the "System" group of the term store,
even though I have the same term present in a term set in my custom term store!
The ideal behavior would be that, on Server2, the new item is tagged with my custom term in the predefined term set and term store.
Please guide me as to which step I am missing.
Reply 1
Let's try a different approach. Instead of trying to create the service from a backup, we can export the data and import it into a new metadata service:
1. Delete the Managed Metadata service in Farm 2.
2. Create a new Managed Metadata service in Farm 2. Note the application pool account.
3. In Farm 1, from an elevated SharePoint Management Shell window, run the Export-SPMetadataWebServicePartitionData cmdlet. This will export the managed metadata into a cabinet file (.CAB):
Export-SPMetadataWebServicePartitionData -Identity "http://sharepointsite" -ServiceProxy "ManagedMetadata Service Proxy Name" -Path "C:\Temp\ManagedMetadata.cab"
4. Copy the .CAB file from Farm 1 to a folder on the Farm 2 SQL Server. Share this folder and give Everyone Full Control sharing permission (we will clean this up later).
5. In Farm 2 SQL Server Management Studio, give the Managed Metadata application pool account (from step 2) the bulkadmin SQL Server role (expand the server instance -> expand Security -> find the account -> right-click, Properties -> Server Roles page -> check bulkadmin -> click OK).
6. In Farm 2, run Import-SPMetadataWebServicePartitionData against the shared .CAB file:
Import-SPMetadataWebServicePartitionData -Identity "http://sharepointsitefarm2" -ServiceProxy "New Managed Metadata Service Proxy Name" -Path "\\SQLServerName\Share\ManagedMetadata.cab"
You'll need to update the URLs and names to reflect your environments.
JASON WARREN
Reply 2
Hello Jason
Thanks for your great help. Even with the migration commands you shared, I was getting the same problem.
The source of the problem in my case was:
after the recreation / DB restoration / migration of taxonomies, a few of the properties in my user profiles had lost their mappings; I had to manually remap them to get things working.
Reply 3
After everything, I am facing the behavior below (to sum up, I will rewrite all the steps I did):
I have Server1 with Microsoft.SharePoint.DLL version 14.0.5123.5000 in Farm1; on Farm2, the Microsoft.SharePoint.DLL version is 14.0.4762.1000.
Farm1 and Farm2 are exact replicas (created from the same content DB), with the taxonomies also being the same, with the same GUIDs for the term store, term sets, and terms.
I edited existing content in Site Collection 1 to refer to the new term.
Then I used the deployment APIs to take an incremental export of Site Collection 1.
The import process on Site Collection 2 on Farm2 creates two entries in TaxonomyHiddenList! These two entries have the same Title, IdForTermStore, IdForTerm, IdForTermSet, etc. The only difference is the SPItem ID.
All the functionality of my site works fine, but I am not sure why these two entries should be there in TaxonomyHiddenList!
After each step above, I ran the Taxonomy Update Scheduler timer job on the Site Collection 2 web application. A manual TaxonomySession.SyncHiddenList also did not help.
It seems one entry in TaxonomyHiddenList came from the content migration of TaxonomyHiddenList itself, and the second entry came as an included dependency of the edited list item which refers to this new term. Might that be the case?
As the import log mentions, the import process on Site Collection 2 refers to TaxonomyHiddenList twice:
My migration process exports only content which is published. If I tweak the process, it works:
1. Create an item on Server1 which is not published, using the new term. This populates an entry in TaxonomyHiddenList. Let this term in TaxonomyHiddenList migrate in the next export/import cycle.
2. Before the next migration cycle, publish the item which was referring to the new term on Server1. After the next migration, the updated item goes to Server2 and there is no extra term in TaxonomyHiddenList on Server2.
Reason: probably the import process on Server2 is treated by SharePoint as a transaction. The TaxonomyHiddenList import and the import of the item referring to it are then under a single transaction, so the item does not get a reference to the newly created/imported TaxonomyHiddenList entry, and hence triggers creation of a new SPItem in the taxonomy hidden list.
If we don't want to use PowerShell for taxonomy migration and want to stick to DB migration, there is a way out: restart SQL Server just before the DB restore on the target, so that no connection is live to the managed metadata DB at the time of the restore.
If we instead try to point the old service to a new DB (on the target, where the new DB is restored from the source metadata DB), some mappings may be lost from the term sets to the columns in the lists.
The ExportChangeToken property in SPExportSettings tells the SharePoint APIs the point in the past from which an incremental export should start.
CurrentChangeToken (read-only) is set internally and is referred to by future exports as a milestone for the export process. Is this property set when some change occurs in the site, or when the incremental batch is complete?
I have a batch which runs perfectly under ideal conditions.
I want that, when I run this batch manually, the value of CurrentChangeToken is not persisted in the system,
so that the automatic batch process keeps running as-is, and my manual run is free to export without any effect on the automatic one.
Please help.
1. Is it OK to create my own persisted object store of SPChangeToken, let the batch use it to get the last value, and add a new token there with the CurrentChangeToken value? The manual process would not update the custom persisted object, so the last entry in the store would always be the one from the automatic process.
2. I have one last resort if I don't find direct APIs in the Deployment namespaces: SPSite.GetChanges(SPChangeToken, SPChangeToken).
In this case, I want to know whether the Microsoft.SharePoint.Deployment.SPExportSettings.CurrentChangeToken (read-only) property is set internally after the export has run, or whether it is independent of when the export runs but dependent on when the actual change occurs.
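For reference, a sketch of an incremental export run (assumptions: the site URL and export location are placeholders, and lastTokenFromCustomStore is the token persisted by your own store, as in option 1):

string lastTokenFromCustomStore = "...";   // last token saved by the automatic batch
SPExportSettings settings = new SPExportSettings();
settings.SiteUrl = "http://ServerName";
settings.ExportMethod = SPExportMethodType.ExportChanges;
settings.ExportChangeToken = lastTokenFromCustomStore;   // point in the past to start from
settings.BaseFileName = "incremental.cmp";
settings.FileLocation = @"C:\Exports";
new SPExport(settings).Run();
// Persist settings.CurrentChangeToken into the custom store only for automatic runs;
// skipping this for manual runs keeps them from disturbing the schedule.
string newToken = settings.CurrentChangeToken;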
Did you ever have a chance to work on a login control for SharePoint which sends the user's credentials over SSL while the rest of the traffic stays non-secured? I found a few good links which suggest creating our own custom cookie handlers.
The Microsoft SharePoint team has written its own cookie handler, specific to SharePoint, which does not allow an authentication token generated on a secured connection to be used on a non-secured one. First, do you see any kind of harm in overriding this behavior of SharePoint's cookie handler with our own custom one? Second, I was able to transfer back the cookies using the approach in the links below, but the current context was lost and the user remained logged off!
1. Extended the current setup onto SSL and made it work the same as the default zone application (including resource files, web.config entries, manual DLLs, etc.).
2. Set the postback URL for the login button.
3. Made the URL rewrite entries as suggested in both zones (these are spread across multiple links; the REQUEST_METHOD rule is in addition to the HTTPS ON rule). URL rewriting can be done very easily with an IIS extension provided by Microsoft.
4. Created the custom cookie handler as suggested and replaced the SharePoint one with it.
5. Set "require SSL" for cookies to false, so that they can also be used over non-SSL.
Reply 1
The two bindings should be configured under the same zone; only then does it work. I had put the alternate access mappings under different zones.
So, with the custom cookie handler and both endpoints under the same zone, it works!
Reply 2
Hello hemantrhtk
Thank you for your post.
This is a quick note to let you know that we are performing research on this issue.
Thanks,
Pengyu Zhao
I have a claims-based web application with NTLM and a SQL provider (mixed-mode authentication). The default SharePoint 2010 login page allows me to log in to the site on Firefox 10 and Google Chrome,
but on IE 8 and 9 it does not redirect me anywhere after the Windows authentication popup, nor can the user access the application pages by typing the URL directly (meaning there is a problem in authentication rather than just redirection).
I have tried allowing all cookies and adding the site to the trusted zone in IE 8 and 9, but no luck.
I had given a host header different from the server name; when I reverted it back to the server name / blank, I was able to use the default SharePoint login page perfectly fine.
This was even though the host header had a DNS entry in place!
Reply 1
Make sure javascript is not disabled. A next step could be to install Fiddler and check which requests/responses are sent over.
Reply 2
Are you forced to actually enter a username and password in IE? That might imply a Kerberos misconfiguration.
Reply 3
Hi,
Have you checked "Remember me" on the login page?
If yes, please try making some changes in the web.config file:
<cookieHandler mode="Custom" path="/" > - change it to:
We are facing an issue connecting our SharePoint 2010 application with InfoPath Designer.
The site collection for which we are getting this error was migrated from MOSS 2007 to SharePoint 2010. For other applications created in SharePoint 2010 itself (no migration), everything works fine.
When we try to connect to the migrated application, we get the error:
‘This feature requires SharePoint 2010 or greater with InfoPath Forms Services enabled’
We tried the points below to troubleshoot this issue, but didn't get any success (http://sharepointbloggin.com/2010/12/ and http://sharepointbloggin.com/):
Activated the "SharePoint Server Enterprise Site Collection features" at site / web level.
Checked the Enterprise CAL licence on the server.
However, when we tried to connect to another web application in the same farm, that worked fine.
Please suggest any workaround; if anyone has faced this kind of problem before, we would greatly appreciate his / her input.
Reply 1
Hi hemantrhtk,
Maybe you haven't activated it on your top-level site. Go to the top-level site of wherever your site is, go to Site Settings > Site Collection Features, and ensure the Enterprise feature is activated there.
This may also be an InfoPath compatibility problem; you could delete the InfoPath-related item, then recreate it in your SharePoint 2010 environment.
Thanks,
Jack
Hello Jack,
We have tried this as well, as per the links shared above.
Reply 2 by http://social.msdn.microsoft.com/profile/shuklabond/?ws=usercard-mini
I need a comparison sheet of SharePoint 2010 against other products with respect to multilingual support. Other products could be: Google Sites, HyperOffice, Documentum, Alfresco, Oracle Beehive, Jive, O3Spaces, MindTouch, Lotus, Vignette, Drupal, Salesforce, and so on.
Reply 1
I don't think a sheet like that exists. I found a couple of interesting links for you:
Hopefully they contain useful information for you. One good thing to remember is that users always have to translate content to the other language themselves; all the system settings (such as settings and default columns) are translated for you. I don't know how other products work compared to SharePoint.
You also buy a platform with SharePoint, so you can use it for a lot more than a multilingual solution.
Account permissions and security settings (Office SharePoint Server) - this article describes Microsoft Office SharePoint Server administrative and services account permissions. It covers the following areas: Microsoft SQL Server, the file system, file shares, and registry entries.
Decentralizing Site Administration - the security model of WSS 3.0, including its advantages and implications when delegating administrative control over SharePoint sites to individual departments while maintaining centralized administrative control over the SharePoint infrastructure.
You're never really going to get a unified document, as it'll almost certainly end up containing things that aren't relevant. Some basic questions would be:
Are you using Performance Point?
Are you using Standard or Enterprise?
What version of SQL Server do you have?
Are you integrating Reporting Services into your deployment?
Are you using any major third-party products from vendors like Metalogix, AvePoint, etc.?
Have you any internal customisations?
All of these would represent deviations from one deployment to the next. You'd be best served by reading widely around the current set-up that you have, reading the link that Bogdan provided, and looking at MVP / Support Engineer blogs.
Steven Andrews
Reply 2
There are thousands of troubleshooting scenarios and articles. Check the blogs of the SharePoint Support Engineers that are working for Microsoft and you will find a lot of information.
Here you have my blog, http://sharepointboco.com/, where you can find SharePoint performance troubleshooting tips and tricks.
I am using MOSS 2007 SP2. I enabled anonymous access on the site collection for lists only. When I call GetListItems on the Lists web service for a list that does not have anonymous access enabled, using the list GUID in the request, I get back 0 items. If I use the list name instead, I get back list items. If I disable anonymous access for the site collection, I also get back list items. Any idea what's going on?
Reply 1
Wrap your request inside Authentication.asmx:
use the Login method and, based on the LoginResult it returns, assign the CookieContainer of the authentication web service to the Lists web service.
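A sketch of that pattern (assumptions: web service proxies generated from /_vti_bin/Authentication.asmx and /_vti_bin/Lists.asmx, here named AuthService and ListsService):

AuthService.Authentication auth = new AuthService.Authentication();
auth.Url = "http://ServerName/_vti_bin/Authentication.asmx";
auth.CookieContainer = new System.Net.CookieContainer();
AuthService.LoginResult result = auth.Login("user", "password");
if (result.ErrorCode == AuthService.LoginErrorCode.NoError)
{
    ListsService.Lists lists = new ListsService.Lists();
    lists.Url = "http://ServerName/_vti_bin/Lists.asmx";
    lists.CookieContainer = auth.CookieContainer;   // reuse the forms auth cookie
    System.Xml.XmlNode items = lists.GetListItems("{LIST-GUID}", null, null, null, null, null, null);
}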
I want to load the Google Analytics JS for some sites and not for others, so I am thinking of doing this with a delegate control, so that users can control it through feature activation and deactivation. I did something like this for a banner image; will this approach work for loading JS too?
If there is another way, please advise.
Thanks
Ronak
Reply 1
You can opt for 2 sets of master pages: one without the scripts included, the other having the Google Analytics implementation.
Reply 2
Guys,
I am trying to add JS to the master page using delegate controls, but it's not working. Here is the code I have written.
Please advise
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;

namespace WSPBuilderProject1.SharePointRoot.Template.ControlTemplates.PhilaGov
{
    public partial class PhilaGovGA : UserControl
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string _url = SPContext.Current.Web.Url;
        }

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            this.Controls.Add(new LiteralControl(@"<script type='text/javascript'>
                var _gaq = _gaq || [];
                _gaq.push(['_setAccount', '#12323232']);
                _gaq.push(['_trackPageview']);
            </script>"));
        }
    }
}
Once I enable the feature, I don't see this code in the head section of the master page.
I am trying to call ResolvePrincipal using People.asmx on two separate farms. Both farms are single-server farms with the same configuration, both target the same AD for user information, and the AD endpoint used in both farms is the same.
But on one server it takes an average of 3.5 seconds to run this web service method, while on the other it takes 43 seconds.